Abstract
Federated learning (FL) aims to perform privacy-preserving machine learning
on distributed data held by multiple data owners. To this end, FL requires the
data owners to perform training locally and share the gradient updates (instead
of the private inputs) with the central server, where they are securely
aggregated across data owners. Although aggregation by itself does not
provably offer privacy protection, prior work showed that it may suffice if the
batch size is sufficiently large. In this paper, we propose the Cocktail Party
Attack (CPA) that, contrary to prior belief, is able to recover the private
inputs from gradients aggregated over a very large batch size. CPA leverages
the crucial insight that the aggregate gradient of a fully-connected layer is a
linear combination of its inputs, which allows us to frame gradient inversion as
a blind source separation (BSS) problem (informally, the cocktail party
problem). We adapt independent component analysis (ICA)--a classic solution to
the BSS problem--to recover private inputs for fully-connected and
convolutional networks, and show that CPA significantly outperforms prior
gradient inversion attacks, scales to ImageNet-sized inputs, and works on large
batch sizes of up to 1024.
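The key observation can be sketched numerically. For a fully-connected layer with inputs X and upstream gradient G, the aggregated weight gradient is X^T G, so each column of the gradient (one output neuron) is a linear combination of the private inputs with per-example coefficients. The variable names and shapes below are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 8, 16, 4

X = rng.normal(size=(batch, d_in))   # private inputs (one row per example)
G = rng.normal(size=(batch, d_out))  # upstream gradient dL/dz for z = X @ W

# Weight gradient aggregated over the whole batch: dL/dW = X^T @ G.
grad_W = X.T @ G

# Column j of the aggregate gradient is a linear combination of the
# input rows X[i], with (unknown) mixing coefficients G[i, j] -- the
# same "mixed signals" structure that blind source separation assumes.
j = 2
mixed = sum(G[i, j] * X[i] for i in range(batch))
assert np.allclose(grad_W[:, j], mixed)
```

Because the observer sees only the mixtures (the gradient columns) and not the coefficients, recovering the rows of X is exactly a cocktail-party-style unmixing problem, which is what motivates applying ICA.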