Abstract
Federated learning (FL) provides a variety of privacy advantages by allowing
clients to collaboratively train a model without sharing their private data.
However, recent studies have shown that private information can still be leaked
through shared gradients. To further minimize the risk of privacy leakage,
existing defenses usually require clients to locally modify their gradients
(e.g., differential privacy) prior to sharing with the server. While these
approaches are effective in certain cases, they treat all of a client's data as a
single entity to protect, which usually comes at a large cost in model utility.
In this paper, we seek to reconcile utility and privacy in FL by proposing a
user-configurable privacy defense, RecUP-FL, that can better focus on the
user-specified sensitive attributes while obtaining significant improvements in
utility over traditional defenses. Moreover, we observe that existing inference
attacks often rely on a machine learning model to extract the private
information (e.g., attributes). We thus formulate such a privacy defense as an
adversarial learning problem, where RecUP-FL generates slight perturbations
that can be added to the gradients before sharing to fool adversary models. To
improve transferability to unqueryable black-box adversary models,
inspired by the idea of meta-learning, RecUP-FL forms a model zoo containing a
set of substitute models and iteratively alternates between simulations of the
white-box and the black-box adversarial attack scenarios to generate
perturbations. Extensive experiments on four datasets under various adversarial
settings (both attribute inference attack and data reconstruction attack) show
that RecUP-FL can meet user-specified privacy constraints over the sensitive
attributes while significantly improving the model utility compared with
state-of-the-art privacy defenses.
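The core mechanism described above — perturbing shared gradients so that substitute adversary models can no longer infer a sensitive attribute, while keeping the perturbation small to preserve utility — can be illustrated with a minimal sketch. This is not the paper's implementation: the logistic-regression adversaries, the perturbation budget `eps`, and the objective (pushing each adversary's prediction toward maximum uncertainty) are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical substitute adversaries ("model zoo"): each is a
# logistic-regression weight vector that tries to infer a binary
# sensitive attribute from a flattened gradient vector.
# Dimensions and count are illustrative, not from the paper.
dim = 16
model_zoo = [rng.normal(size=dim) for _ in range(3)]

def perturb_gradient(grad, zoo, eps=0.5, steps=20, lr=0.1):
    """Find a small additive perturbation `delta` that pushes each
    substitute adversary's attribute prediction toward 0.5 (maximum
    uncertainty), subject to the budget ||delta||_inf <= eps."""
    delta = np.zeros_like(grad)
    for _ in range(steps):
        # Alternate over the zoo, loosely mirroring the paper's
        # alternation between simulated attack scenarios.
        for w in zoo:
            p = sigmoid(w @ (grad + delta))
            # Gradient of (p - 0.5)^2 w.r.t. delta:
            # 2*(p - 0.5) * p*(1 - p) * w
            g = 2.0 * (p - 0.5) * p * (1.0 - p) * w
            delta -= lr * g
            # Projection step enforcing the utility budget.
            delta = np.clip(delta, -eps, eps)
    return grad + delta

grad = rng.normal(size=dim)   # the gradient a client would share
shared = perturb_gradient(grad, model_zoo)
```

In this toy setup the client would transmit `shared` instead of `grad`; the bounded perturbation caps the utility loss, while the adversaries in the zoo stand in for the attacker's (unknown) inference model.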