Abstract
Decentralized stochastic optimization enables multiple agents to cooperatively
solve a global optimization problem without a central coordinator, and is
therefore gaining increasing attention in areas as diverse as machine
learning, control, and sensor networks. Since the associated data usually
contain sensitive information, such as user locations and personal identities,
privacy protection has emerged as a crucial need in the implementation of
decentralized stochastic optimization. In this paper, we propose a
decentralized stochastic optimization algorithm that is able to guarantee
provable convergence accuracy even in the presence of aggressive quantization
errors that are proportional to the amplitude of quantization inputs. The
result applies to both convex and non-convex objective functions, and allows
us to exploit aggressive quantization schemes to obfuscate shared information,
thereby enabling privacy protection without losing provable optimization
accuracy. In fact, by using a stochastic ternary quantization scheme, which
quantizes any value to three numerical levels, we achieve quantization-based
rigorous differential privacy in decentralized stochastic optimization, which
has not been reported before. In combination with the presented quantization
scheme, the proposed algorithm ensures, for the first time, rigorous
differential privacy in decentralized stochastic optimization without losing
provable convergence accuracy. Simulation results for a distributed estimation
problem as well as numerical experiments for decentralized learning on a
benchmark machine learning dataset confirm the effectiveness of the proposed
approach.
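
To illustrate the kind of stochastic ternary quantizer the abstract refers to, the sketch below shows a TernGrad-style scheme in Python. This is an assumption for illustration only: the paper's exact levels, probabilities, and any additional randomization used for differential privacy may differ. It does, however, exhibit the stated property that each value is mapped to one of three levels and the quantization error is proportional to the amplitude of the input.

import numpy as np

def stochastic_ternary_quantize(x, rng=None):
    # Quantize each entry of x to one of three levels {-s, 0, +s},
    # where s = max|x_i| (a TernGrad-style sketch, not necessarily the
    # paper's exact scheme). Each coordinate becomes sign(x_i) * s with
    # probability |x_i| / s, and 0 otherwise, so the quantizer is
    # unbiased and its error scales with the amplitude of the input.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    s = np.max(np.abs(x))
    if s == 0.0:
        return np.zeros_like(x)
    p = np.abs(x) / s                      # probability of keeping the sign
    keep = rng.random(x.shape) < p
    return np.sign(x) * s * keep

# Example: the quantized vector takes values in {-1.2, 0, 1.2} and is an
# unbiased estimate of the input.
x = np.array([0.3, -1.2, 0.05, 0.8])
print(stochastic_ternary_quantize(x))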