AI Security Portal K Program
Mitigating Noise Detriment in Differentially Private Federated Learning with Model Pre-training
Abstract
Pre-training leverages public datasets to build an advanced machine learning model that can then be easily tuned to a variety of downstream tasks, and it has been extensively explored as a way to reduce computation and communication costs. Inspired by these advantages, we are the first to explore how model pre-training can mitigate noise detriment in differentially private federated learning (DPFL). DPFL extends federated learning (FL), the de facto standard for preserving privacy when training a model across multiple clients that own private data, by adding differentially private (DP) noise to obfuscate the model gradients exposed in FL; this noise, however, can considerably impair model accuracy. In our work, we conduct a comprehensive empirical study comparing head fine-tuning (HT) and full fine-tuning (FT), both of which build on pre-training, with scratch training (ST) in DPFL. Our experiments tune models pre-trained on ImageNet-1K with the CIFAR-10, CHMNIST, and Fashion-MNIST (FMNIST) datasets, respectively. The results demonstrate that HT and FT significantly mitigate the influence of noise by reducing the number of gradient exposures. In particular, HT outperforms FT when the privacy budget is tight or the model size is large. A visualization and explanation study further substantiates our findings. Our pioneering study introduces a new perspective on enhancing DPFL and expands its practical applications.
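The DP mechanism the abstract refers to can be illustrated with a minimal sketch: each client clips its model update to a fixed norm and adds Gaussian noise before sharing it, and the server averages the sanitized updates FedAvg-style. This is a generic Gaussian-mechanism sketch, not the paper's exact implementation; the function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's update to clip_norm, then add Gaussian noise
    with standard deviation noise_multiplier * clip_norm."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Server side: aggregate the sanitized client updates (FedAvg-style mean).
rng = np.random.default_rng(42)
client_updates = [rng.normal(size=8) for _ in range(4)]
sanitized = [dp_sanitize(u, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
             for u in client_updates]
global_update = np.mean(sanitized, axis=0)
```

Under this view, the paper's finding is intuitive: HT and FT converge in fewer rounds than ST, so each parameter is exposed (and noised) fewer times, leaving less accumulated noise in the final model.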