Federated learning (FL) empowers distributed clients to collaboratively train
a shared machine learning model by exchanging parameter information.
Although FL can protect clients' raw data, malicious users can still
reconstruct the original data from the disclosed parameters. To mitigate this
vulnerability, differential privacy (DP) is incorporated into FL clients to
perturb the original parameters, which, however, can significantly impair the
accuracy of the trained model. In this work, we study a crucial question that
has been largely overlooked by existing works: what are the optimal numbers of
queries and replies in FL with DP such that the final model accuracy is
maximized? In FL, the parameter server (PS) must query participating clients
over multiple global iterations to complete training, and each client responds
to a query from the PS by conducting a local iteration. Our work investigates
how many times the PS should query clients and how many times each client
should reply to the PS.
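To make this query/reply structure concrete, the following minimal sketch (our
own illustration, not the paper's implementation) simulates FedAvg-style
training in which the PS issues T queries and each client answers a query with
TAU local gradient steps; T, TAU, and all other names and values here are
hypothetical placeholders.

```python
import numpy as np

# Hypothetical hyper-parameters: T = number of PS queries (global
# iterations); TAU = number of local iterations a client runs per query
# (its replies). Names and values are illustrative, not from the paper.
T, TAU, NUM_CLIENTS, LR, DIM = 50, 5, 10, 0.1, 20

rng = np.random.default_rng(0)
# Synthetic local datasets: each client holds (X_i, y_i) for a simple
# least-squares objective.
data = [(rng.normal(size=(32, DIM)), rng.normal(size=32))
        for _ in range(NUM_CLIENTS)]

def local_update(w, X, y, tau, lr):
    """One client's response to a PS query: tau local GD steps."""
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(DIM)
for t in range(T):                       # the PS queries clients T times
    updates = [local_update(w_global, X, y, TAU, LR) for X, y in data]
    w_global = np.mean(updates, axis=0)  # the PS aggregates the replies
```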
We consider the two most extensively used DP mechanisms, i.e., the Laplace
mechanism and the Gaussian mechanism. Through convergence rate analysis, we
determine the optimal numbers of queries and replies in FL with DP such that
the final model accuracy is maximized. Finally, extensive experiments are
conducted on two publicly available datasets, MNIST and FEMNIST, to verify our
analysis. The results demonstrate that properly setting the numbers of queries
and replies can significantly improve the final model accuracy in FL with DP.
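As a companion illustration of the two DP mechanisms mentioned above, the
sketch below perturbs a client's parameter vector before disclosure, using the
standard noise calibrations: Laplace noise with scale sensitivity/epsilon for
epsilon-DP, and Gaussian noise with sigma = sensitivity * sqrt(2 ln(1.25/delta))
/ epsilon for (epsilon, delta)-DP (the classical bound, valid for epsilon < 1).
The function names, sensitivity, and privacy-budget values are placeholders,
not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(params, sensitivity, epsilon):
    """Laplace mechanism: noise scale b = sensitivity / epsilon,
    the standard epsilon-DP calibration for L1 sensitivity."""
    b = sensitivity / epsilon
    return params + rng.laplace(loc=0.0, scale=b, size=params.shape)

def gaussian_perturb(params, sensitivity, epsilon, delta):
    """Gaussian mechanism: sigma = sensitivity * sqrt(2 ln(1.25/delta))
    / epsilon, the classical (epsilon, delta)-DP calibration for L2
    sensitivity (valid for epsilon < 1)."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return params + rng.normal(loc=0.0, scale=sigma, size=params.shape)

# Illustrative use: perturb a client's parameters before disclosing them
# to the PS. Sensitivity and budget below are placeholder values.
w = np.ones(20)
w_lap = laplace_perturb(w, sensitivity=1.0, epsilon=0.5)
w_gau = gaussian_perturb(w, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```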