AIセキュリティポータル K Program
Differentially Private Federated Learning: A Systematic Review
Abstract
In recent years, privacy and security concerns in machine learning have propelled trusted federated learning to the forefront of research. Differential privacy has emerged as the de facto standard for privacy protection in federated learning due to its rigorous mathematical foundation and provable guarantees. Despite extensive research on algorithms that incorporate differential privacy within federated learning, there remains an evident lack of systematic reviews that categorize and synthesize these studies. Our work presents a systematic overview of differentially private federated learning. Existing taxonomies have not adequately considered the objects and levels of privacy protection provided by the various differential privacy models used in federated learning. To rectify this gap, we propose a new taxonomy of differentially private federated learning based on the definitions and guarantees of the various differential privacy models and federated scenarios. Our classification allows for a clear delineation of the protected objects across the various differential privacy models and their respective neighborhood levels within federated learning environments. Furthermore, we explore the applications of differential privacy in federated learning scenarios. Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
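To make the abstract's core idea concrete, the following is a minimal sketch of the aggregation step that most differentially private federated learning algorithms share: each client update is clipped to bound its sensitivity, then Gaussian noise calibrated to that bound is added before averaging. The function name and parameters are illustrative, not taken from any specific paper in this list.

```python
import numpy as np

def private_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Average client updates with per-client L2 clipping and Gaussian noise,
    in the style of DP-FedAvg-type aggregation (names are illustrative)."""
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale each update so its L2 norm is at most clip_norm.
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (sensitivity = clip_norm);
    # the (epsilon, delta) guarantee follows from noise_multiplier via standard
    # accounting, which is omitted here.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# With noise_multiplier=0 the function reduces to plain clipped averaging:
# both example updates clip to unit norm [0.6, 0.8], so the average is [0.6, 0.8].
updates = [np.array([3.0, 4.0]), np.array([0.6, 0.8])]
avg = private_aggregate(updates, clip_norm=1.0, noise_multiplier=0.0)
```

The clipping bound is what distinguishes the neighborhood levels the taxonomy discusses: clipping per client record versus per client update changes which object the differential privacy guarantee protects.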
Deep learning with differential privacy
Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
Published: 2016
Little is Enough: Boosting Privacy by Sharing Only Hard Labels in Federated Semi-Supervised Learning
Amr Abourayya, Jens Kleesiek, Kanishka Rao, Erman Ayday, Bharat Rao, Geoffrey I. Webb, Michael Kamp
Published: 2025
cpSGD: Communication-efficient and differentially-private distributed SGD
Naman Agarwal, Ananda Theertha Suresh, Felix X. Yu, Sanjiv Kumar, Brendan McMahan
Published: 2018
Differentially private learning with adaptive clipping
Galen Andrew, Om Thakkar, Brendan McMahan, Swaroop Ramaswamy
Published: 2021
Hypothesis testing interpretations and Rényi differential privacy
Borja Balle, Gilles Barthe, Marco Gaboardi, Justin Hsu, Tetsuya Sato
Published: 2020
The privacy blanket of the shuffle model
Borja Balle, James Bell, Adrià Gascón, Kobbi Nissim
Published: 2019
Unlocking the Power of Differentially Private Zeroth-order Optimization for Fine-tuning LLMs
Ergute Bao, Yangfan Jiang, Fei Wei, Xiaokui Xiao, Zitao Li, Yaliang Li, Bolin Ding
Published: 2025
Prochlo: Strong privacy for analytics in the crowd
Andrea Bittau, Úlfar Erlingsson, Petros Maniatis, Ilya Mironov, Ananth Raghunathan, David Lie, Mitch Rudominer, Ushasree Kode, Julien Tinnes, Bernhard Seefeld
Published: 2017