Abstract
In distributed learning settings, models are iteratively updated with shared
gradients computed from potentially sensitive user data. While previous work
has studied various privacy risks of sharing gradients, our paper aims to
provide a systematic approach to analyze private information leakage from
gradients. We present a unified game-based framework that encompasses a broad
range of attacks including attribute, property, distributional, and user
disclosures. We investigate how different uncertainties of the adversary affect
its inferential power via extensive experiments on five datasets across
various data modalities. Our results demonstrate the inefficacy of solely
relying on data aggregation to achieve privacy against inference attacks in
distributed learning. We further evaluate five types of defenses, namely,
gradient pruning, signed gradient descent, adversarial perturbations,
variational information bottleneck, and differential privacy, under both static
and adaptive adversary settings. We provide an information-theoretic view for
analyzing the effectiveness of these defenses against inference from gradients.
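To make the defense evaluation concrete, the sketch below applies two of the named defenses to a per-client gradient before it is shared: magnitude-based gradient pruning and norm clipping with Gaussian noise in the spirit of differential privacy. This is an illustrative sketch only, not the paper's implementation; the function names, keep ratio, clip norm, and noise multiplier are assumptions.

```python
import numpy as np

def prune_gradient(grad, keep_ratio=0.1):
    """Magnitude-based gradient pruning: keep only the largest entries, zero the rest."""
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return np.where(np.abs(grad) >= threshold, grad, 0.0)

def noisy_clipped_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the gradient's L2 norm and add Gaussian noise, as in DP-SGD-style defenses."""
    rng = np.random.default_rng() if rng is None else rng
    scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return grad * scale + noise

# Example: defend a raw per-client gradient before sharing it with the server.
rng = np.random.default_rng(0)
raw_grad = rng.normal(size=1000)
shared_grad = noisy_clipped_gradient(prune_gradient(raw_grad, keep_ratio=0.2), rng=rng)
```

Pruning and noising each discard information in the shared gradient, which is why an information-theoretic view is a natural lens for comparing them.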
Finally, we introduce a method for auditing attribute inference privacy that
improves the empirical estimation of worst-case privacy by crafting
adversarial canary records.
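As a deliberately simplified instance of the game-based framework, the following sketch estimates attribute disclosure from a single shared per-example gradient of a logistic-regression model: a challenger hides a binary sensitive attribute in a record, and the adversary guesses it by matching candidate gradients. The model choice, function names, and parameters are illustrative assumptions rather than the paper's setup; an audit in the spirit of the canary method would additionally craft the record to maximize the resulting success rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def per_example_gradient(w, x, y):
    """Gradient of the logistic loss for a single record (w is the model weight vector)."""
    z = 1.0 / (1.0 + np.exp(-x @ w))
    return (z - y) * x

def attribute_inference_game(w, x_public, y, n_trials=1000):
    """Game-based estimate of attribute leakage from one shared per-example gradient.

    Each trial: the challenger samples a secret bit b, sets the record's sensitive
    attribute to b, and reveals the gradient. The adversary guesses b by comparing
    the observed gradient with the gradients of both candidate records.
    """
    wins = 0
    for _ in range(n_trials):
        b = rng.integers(0, 2)
        record = x_public.copy()
        record[-1] = float(b)               # last feature plays the sensitive attribute
        observed = per_example_gradient(w, record, y)
        distances = []
        for guess in (0.0, 1.0):
            candidate = x_public.copy()
            candidate[-1] = guess
            distances.append(np.linalg.norm(observed - per_example_gradient(w, candidate, y)))
        wins += int(np.argmin(distances) == b)
    return wins / n_trials                  # 0.5 = no leakage, 1.0 = full disclosure

d = 20
w = rng.normal(size=d)
x_public = rng.normal(size=d)
print(f"attribute inference success rate: {attribute_inference_game(w, x_public, y=1.0):.2f}")
```

The same game template extends to aggregated, pruned, or noised gradients by replacing the observed gradient with the defended one, which is how the success rate can be compared across defenses.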