Abstract
Decentralized optimization is increasingly popular in machine learning for
its scalability and efficiency. Intuitively, it should also provide better
privacy guarantees, as nodes only observe the messages sent by their neighbors
in the network graph. However, formalizing and quantifying this gain is
challenging: existing results are typically limited to Local Differential
Privacy (LDP) guarantees that overlook the advantages of decentralization. In
this work, we
introduce pairwise network differential privacy, a relaxation of LDP that
captures the fact that the privacy leakage from a node $u$ to a node $v$ may
depend on their relative position in the graph. We then analyze the combination
of local noise injection with (simple or randomized) gossip averaging protocols
on fixed and random communication graphs. We also derive a differentially
private decentralized optimization algorithm that alternates between local
gradient descent steps and gossip averaging. Our results show that our
algorithms amplify privacy guarantees as a function of the distance between
nodes in the graph, matching the privacy-utility trade-off of the trusted
curator, up to factors that explicitly depend on the graph topology. Finally,
we illustrate our privacy gains with experiments on synthetic and real-world
datasets.
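
To make the core mechanism concrete, below is a minimal sketch of local noise injection followed by synchronous gossip averaging on a fixed communication graph. This is an illustration, not the paper's algorithm: the function name, the use of Metropolis-Hastings gossip weights, and all parameters are illustrative assumptions.

```python
import numpy as np

def noisy_gossip_averaging(values, adjacency, sigma, n_rounds, seed=None):
    """Illustrative sketch: each node perturbs its private value with
    Gaussian noise (the local DP step), then nodes repeatedly average
    with their neighbors via a doubly stochastic gossip matrix W.

    values    : array of shape (n,), one private value per node
    adjacency : symmetric 0/1 array of shape (n, n), the fixed graph
    sigma     : standard deviation of the injected Gaussian noise
    n_rounds  : number of synchronous gossip rounds
    """
    rng = np.random.default_rng(seed)
    n = len(values)
    # Local noise injection before any communication.
    x = values + rng.normal(0.0, sigma, size=n)
    # Metropolis-Hastings weights yield a symmetric, doubly stochastic W.
    degrees = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(degrees[i], degrees[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    # Each round, every node replaces its value by a weighted average
    # of its own value and its neighbors' values.
    for _ in range(n_rounds):
        x = W @ x
    return x
```

With sigma = 0 and enough rounds on a connected graph, all nodes converge to the exact average; with sigma > 0, they converge to the average of the noisy values, which is how gossip trades a small utility loss for local privacy in this sketch.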