Abstract
Federated Learning (FL) is a distributed machine learning technique that
enables model training across multiple devices or organizations by sharing
training parameters instead of raw data. However, adversaries can still infer
individual information through inference attacks (e.g., differential attacks)
on these training parameters. As a result, Differential Privacy (DP) has been
widely used in FL to prevent such attacks.
We consider differentially private federated learning in a
resource-constrained scenario, where both the privacy budget and the number of
communication rounds are limited. Through a theoretical convergence analysis,
we derive the optimal number of local DPSGD iterations for clients between any
two consecutive global updates. Based on this, we design an algorithm called
Differentially Private Federated Learning with Adaptive Local Iterations
(ALI-DPFL). We evaluate our algorithm on the MNIST, FashionMNIST, and CIFAR-10
datasets, and demonstrate significantly better performance than previous work
in the resource-constrained scenario. Code is available at
https://github.com/cheng-t/ALI-DPFL.
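
To make the local update step concrete, the following is a minimal sketch of one client's DPSGD loop on a toy logistic-regression model in NumPy. The function names (`clip`, `local_dpsgd`) and all hyperparameters are illustrative assumptions, not the paper's implementation; in ALI-DPFL the iteration count `tau` would be set adaptively by the convergence-based rule rather than passed in as a fixed argument.

```python
# A minimal, self-contained sketch of one client's local DPSGD loop
# (illustrative only; not the authors' code).
import numpy as np

def clip(g, C):
    """Clip a per-sample gradient to L2 norm at most C."""
    norm = np.linalg.norm(g)
    return g if norm <= C else g * (C / norm)

def local_dpsgd(w, X, y, tau, lr=0.1, C=1.0, sigma=1.0, batch=32, rng=None):
    """Run tau DPSGD iterations locally and return the updated weights.

    In ALI-DPFL, tau would be chosen adaptively from the convergence
    analysis; here it is simply a parameter.
    """
    rng = rng or np.random.default_rng(0)
    w = w.copy()
    for _ in range(tau):
        idx = rng.choice(len(X), size=batch, replace=False)
        grads = []
        for i in idx:
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))      # sigmoid prediction
            grads.append(clip((p - y[i]) * X[i], C))  # clipped per-sample grad
        g = np.mean(grads, axis=0)
        g += rng.normal(0.0, sigma * C / batch, size=w.shape)  # Gaussian noise
        w -= lr * g
    return w

# Toy usage: one client with synthetic data.
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
w_global = np.zeros(10)
w_local = local_dpsgd(w_global, X, y, tau=5, rng=rng)
```

In the full federated protocol, the server would aggregate (e.g., average) the returned weights across clients and broadcast the result as the next global model; each such exchange consumes one of the constrained communication rounds.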