Abstract
Decentralized learning enables distributed agents to collaboratively train a
shared machine learning model without a central server, through local
computation and peer-to-peer communication. Although each agent retains its
dataset locally, sharing local models can still expose private information
about the training data to adversaries. To mitigate such privacy attacks,
a common strategy is to inject random artificial noise at each agent before
exchanging local models between neighbors. However, this often leads to utility
degradation due to the accumulated artificial noise perturbing the
learning algorithm. In this work, we introduce CorN-DSGD, a novel
covariance-based framework for generating correlated privacy noise across
agents, which unifies several state-of-the-art methods as special cases. By
leveraging network topology and mixing weights, CorN-DSGD optimizes the noise
covariance to achieve network-wide noise cancellation. Experimental results
show that CorN-DSGD cancels more noise than existing pairwise correlation
schemes, improving model performance under formal privacy guarantees.
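The core idea of network-wide noise cancellation can be sketched as follows. This is an illustrative toy example under strong simplifying assumptions (a fully connected network with uniform mixing weights, and a zero-sum projection as the covariance choice); the actual CorN-DSGD framework optimizes the noise covariance for a given topology and mixing matrix, and the function name `correlated_noise` is hypothetical:

```python
import numpy as np

def correlated_noise(n_agents, dim, sigma, rng):
    """Draw jointly Gaussian noise whose sum over agents is exactly zero.

    Illustrative sketch only: uses the covariance sigma^2 * (I - 11^T/n)
    across agents, so each agent's marginal noise is still Gaussian, but
    the network-wide sum cancels under perfect averaging. CorN-DSGD
    instead optimizes the covariance for the network topology and
    mixing weights.
    """
    e = rng.normal(0.0, sigma, size=(n_agents, dim))  # i.i.d. Gaussian draws
    return e - e.mean(axis=0, keepdims=True)          # project onto zero-sum subspace

rng = np.random.default_rng(0)
noise = correlated_noise(n_agents=5, dim=3, sigma=1.0, rng=rng)
# Each agent's share is random (protecting its local model), yet after
# perfect mixing (averaging across agents) the injected noise cancels:
print(np.allclose(noise.mean(axis=0), 0.0))  # → True
```

Under partial mixing on a sparse topology the cancellation is only approximate, which is precisely the regime where optimizing the covariance against the mixing weights matters.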