How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers
Abstract
As a booming research area in the past decade, deep learning technologies
have been driven by big data collected and processed on an unprecedented scale.
However, privacy concerns arise due to the potential leakage of sensitive
information from the training data. Recent research has revealed that deep
learning models are vulnerable to various privacy attacks, including membership
inference attacks, attribute inference attacks, and gradient inversion attacks.
Notably, the efficacy of these attacks varies from model to model. In this
paper, we answer a fundamental question: Does model architecture affect model
privacy? By investigating representative model architectures from convolutional
neural networks (CNNs) to Transformers, we demonstrate that Transformers
generally exhibit higher vulnerability to privacy attacks than CNNs.
Additionally, we identify the micro design of activation layers, stem layers,
and LN layers as major factors contributing to the resilience of CNNs against
privacy attacks, while the presence of attention modules is another main factor
that exacerbates the privacy vulnerability of Transformers. Our findings offer
valuable insights for defending deep learning models against privacy attacks
and inspire the research community to develop privacy-friendly model
architectures.
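To make the first attack family mentioned in the abstract concrete, the sketch below illustrates a simple confidence-thresholding membership inference attack on synthetic data. This is not the paper's method, just a minimal, commonly used baseline: an overfit model tends to be more confident on its training ("member") examples than on unseen ("non-member") examples, and an attacker can exploit that gap. All names and the synthetic logit distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence_mia(member_logits, nonmember_logits):
    """Threshold-based membership inference: predict 'member' when the
    model's max softmax confidence exceeds a threshold, and report the
    best attack accuracy over all thresholds (balanced classes here)."""
    def max_conf(logits):
        # Numerically stable softmax, then take each row's top probability.
        z = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return p.max(axis=1)

    m = max_conf(member_logits)
    n = max_conf(nonmember_logits)
    scores = np.concatenate([m, n])
    labels = np.concatenate([np.ones_like(m), np.zeros_like(n)])
    # Sweep every observed score as a candidate threshold.
    best = 0.0
    for t in np.unique(scores):
        acc = ((scores >= t) == labels).mean()
        best = max(best, acc)
    return best

# Synthetic logits: members get a strongly boosted true-class logit
# (mimicking overfitting); non-members get only a mild boost.
members = rng.normal(4.0, 1.0, (500, 10))
members[:, 0] += 4.0
nonmembers = rng.normal(4.0, 1.0, (500, 10))
nonmembers[:, 0] += 1.0

attack_acc = confidence_mia(members, nonmembers)
print(f"attack accuracy: {attack_acc:.2f}")  # well above the 0.5 chance level
```

The larger the confidence gap between member and non-member examples, the higher the attack accuracy, which is why architectural choices that affect overfitting behavior (as studied in this paper for CNNs vs. Transformers) translate directly into differences in privacy leakage.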