This page lists the attacks and factors that negatively affect "Training data leakage" in the information systems aspect of the AI Security Map, the defensive methods and countermeasures against them, and the relevant AI technologies, tasks, and data types. It also indicates related elements in the external influence aspect.
Attack or cause
- Membership inference attack
Defensive method or countermeasure
- Differential privacy
- Encryption technology
- AI access control
Targeted AI technology
- DNN
- CNN
- GNN
- GAN
- Diffusion model
- Federated learning
- LLM
Task
- Classification
- Generation
Data
- Image
- Graph
- Text
- Audio
Related external influence aspect
- Privacy
- Copyright and authorship
- Reputation
- Psychological impact
- Compliance with laws and regulations
References
Membership inference attack
- Membership Inference Attacks Against Machine Learning Models, 2017
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2017
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, 2018
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models, 2019
- Systematic Evaluation of Privacy Risks of Machine Learning Models, 2020
- Information Leakage in Embedding Models, 2020
- Membership Leakage in Label-Only Exposures, 2020
- Label-Only Membership Inference Attacks, 2020
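As a concrete illustration of the attacks referenced above, the simplest family thresholds the model's prediction confidence: overfitted models tend to be more confident on their training data (cf. Shokri et al. 2017; Song and Mittal 2020). A minimal sketch in plain Python; the confidence distributions and threshold below are hypothetical, chosen only to show the decision rule:

```python
import random

def confidence_attack(confidences, threshold=0.9):
    """Guess 'member' when the model's top-class confidence for a
    sample exceeds the threshold (hypothetical threshold value)."""
    return [c >= threshold for c in confidences]

# Simulated demo: members drawn from a higher-confidence distribution
# than non-members (both distributions are assumptions for illustration).
random.seed(0)
members = [min(1.0, random.gauss(0.95, 0.03)) for _ in range(1000)]
nonmembers = [min(1.0, random.gauss(0.80, 0.08)) for _ in range(1000)]

guesses = confidence_attack(members + nonmembers)
truth = [True] * 1000 + [False] * 1000
acc = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
```

On these simulated distributions the attack accuracy is well above the 50% random-guess baseline, which is exactly the leakage signal the papers above measure.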
Differential privacy
- Deep Learning with Differential Privacy, 2016
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2017
- Learning Differentially Private Recurrent Language Models, 2018
- Efficient Deep Learning on Multi-Source Private Data, 2018
- Evaluating Differentially Private Machine Learning in Practice, 2019
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy, 2020
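The mechanism shared by the DP-SGD line of work above (Abadi et al. 2016) is per-example gradient clipping followed by calibrated Gaussian noise before averaging. A minimal sketch of one update step in plain Python; the function and parameter names are hypothetical, and a real implementation would also track the privacy budget:

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One DP-SGD-style update: clip each per-example gradient to
    clip_norm in L2 norm, sum the clipped gradients, add Gaussian
    noise with std noise_multiplier * clip_norm per coordinate,
    then average and scale by the learning rate."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])

    dim = len(per_example_grads[0])
    batch = len(per_example_grads)
    sigma = noise_multiplier * clip_norm
    noisy_sum = [
        sum(g[i] for g in clipped) + random.gauss(0.0, sigma)
        for i in range(dim)
    ]
    # Returned value is the parameter update (negative gradient step).
    return [-lr * s / batch for s in noisy_sum]
```

Clipping bounds each example's influence on the update, and the noise scale is calibrated to that bound; together they yield the differential privacy guarantee analyzed in the references above.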
Encryption technology
- Gazelle: A Low Latency Framework for Secure Neural Network Inference, 2018
- Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference, 2018
- nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data, 2019
- Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network, 2021
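The homomorphic-encryption frameworks above compute on ciphertexts without decrypting them. A toy Paillier example showing the additive homomorphism (multiplying ciphertexts adds plaintexts); the primes here are tiny and fixed, so this is deliberately insecure and for illustration only, whereas real deployments use vetted libraries and large moduli:

```python
import math
import random

# Toy Paillier keypair with tiny fixed primes (INSECURE, demo only).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

def encrypt(m):
    """Encrypt m in [0, n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
c_sum = (encrypt(12) * encrypt(30)) % n2
```

Here `decrypt(c_sum)` recovers 42 even though the server multiplying the ciphertexts never sees 12 or 30; this is the property the encrypted-inference frameworks above build on (typically with lattice-based FHE rather than Paillier).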