Federated learning aims to protect data privacy by collaboratively learning a
model without sharing private data among users. However, an adversary may still
be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at
the price of significantly degrading the accuracy or utility of the trained
models. In this paper, we investigate a utility enhancement scheme based on
Laplacian smoothing for differentially private federated learning (DP-Fed-LS),
in which the statistical precision of parameter aggregation with injected
Gaussian noise is improved without consuming additional privacy budget. Our key
observation is that the aggregated gradients in federated learning often enjoy
a type of smoothness, i.e., approximate sparsity in the graph Fourier basis with
polynomially decaying Fourier coefficients as frequency grows, which Laplacian
smoothing can exploit efficiently. Under a prescribed differential privacy
budget, we provide convergence error bounds with tight rates for DP-Fed-LS
under uniform subsampling of heterogeneous (non-IID) data, revealing how
Laplacian smoothing can improve utility through, among other effects, reduced
effective dimensionality and reduced noise variance. Experiments on the MNIST,
SVHN, and Shakespeare
datasets show that the proposed method improves model accuracy while providing
DP guarantees and membership privacy, under both uniform and Poisson
subsampling mechanisms.
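As a minimal illustrative sketch (not the paper's implementation), one standard form of Laplacian smoothing denoises a noisy aggregate g by solving (I - sigma * Delta) g_ls = g, where Delta is the 1-D discrete Laplacian with periodic boundary conditions. Because this circulant system diagonalizes in the Fourier basis, it can be solved with an FFT, attenuating high-frequency noise while passing low-frequency signal; the function name and the parameter sigma below are illustrative choices:

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Solve (I - sigma * Delta) g_ls = g via FFT.

    Delta is the 1-D discrete Laplacian with periodic boundaries,
    i.e. (Delta g)_i = g_{i-1} - 2 g_i + g_{i+1}. Its circulant
    matrix diagonalizes in the discrete Fourier basis, so the solve
    is a per-frequency division. sigma is an illustrative smoothing
    strength; sigma = 0 recovers g unchanged.
    """
    n = g.size
    kernel = np.zeros(n)
    kernel[0] = -2.0          # center coefficient of the Laplacian stencil
    kernel[1] = 1.0           # right neighbor
    kernel[-1] = 1.0          # left neighbor (periodic wrap-around)
    # Eigenvalues of Delta are fft(kernel) = -2 + 2*cos(2*pi*k/n) <= 0,
    # so the denominator 1 - sigma * lambda_k is >= 1: high frequencies
    # (large |lambda_k|) are damped, the constant mode is untouched.
    denom = 1.0 - sigma * np.fft.fft(kernel)
    return np.real(np.fft.ifft(np.fft.fft(g) / denom))
```

For example, a constant vector (pure low frequency) passes through unchanged, while the highest-frequency alternating vector is damped by the factor 1 + 4*sigma, which is the variance-reduction effect the abstract refers to.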