This page lists the attacks and factors that cause the negative impact “Reconstruction of training data” in the information systems aspect of the AI Security Map, the defensive methods and countermeasures against them, and the relevant AI technologies, tasks, and data. It also indicates related elements in the external influence aspect.
Attack or cause
- Model inversion attack
Defensive method or countermeasure
- Differential privacy
- Encryption technology
- AI access control
Targeted AI technology
- DNN
- CNN
- Contrastive learning
- FSL
- GNN
- Federated learning
- LSTM
- RNN
- Diffusion model
Task
- Classification
- Generation
Data
- Image
- Graph
- Audio
Related external influence aspect
- Privacy
- Copyright and authorship
- Reputation
- Psychological impact
- Compliance with laws and regulations
References
Model inversion attack
- The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks, 2019
- Information leakage in embedding models, 2020
- Exploiting Explanations for Model Inversion Attacks, 2021
- Stealing Links from Graph Neural Networks, 2021
- Inference Attacks Against Graph Neural Networks, 2022
- Text Embeddings Reveal (Almost) As Much As Text, 2023
- Language Model Inversion, 2024
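As a rough illustration of the reconstruction risk these papers study (not the method of any single paper above), the following sketch shows the core idea of gradient-based model inversion: starting from a blank input, the attacker optimizes it until the victim classifier confidently assigns a chosen label. The victim network, input shape, and hyperparameters are placeholder assumptions.

```python
# Minimal model inversion sketch: recover a class-representative input from a
# trained classifier by gradient ascent on the target class logit.
import torch
import torch.nn as nn

# Hypothetical victim model and input shape (1x28x28): assumptions for illustration.
victim = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)
victim.eval()

def invert(target_class: int, steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Optimize an input until the victim labels it as `target_class`."""
    x = torch.zeros(1, 1, 28, 28, requires_grad=True)   # start from a blank image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = victim(x)
        # Maximize the target logit; a small L2 prior keeps the image plausible.
        loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)                          # keep pixels in [0, 1]
    return x.detach()

reconstruction = invert(target_class=3)   # class-representative reconstruction
```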
Differential privacy
- Deep Learning with Differential Privacy, 2016
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2017
- Learning Differentially Private Recurrent Language Models, 2018
- Efficient Deep Learning on Multi-Source Private Data, 2018
- Evaluating Differentially Private Machine Learning in Practice, 2019
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy, 2020
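A minimal sketch of the DP-SGD idea behind several of these papers (per-example gradient clipping followed by Gaussian noise), assuming a toy linear model and illustrative hyperparameters; production systems would use a maintained library such as Opacus and proper privacy accounting rather than this manual loop.

```python
# DP-SGD sketch: clip each example's gradient to a fixed norm, then add
# Gaussian noise before the parameter update. Model, data, clip norm, and
# noise multiplier are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Linear(20, 2)                 # placeholder model (assumption)
loss_fn = nn.CrossEntropyLoss()
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.05

def dp_sgd_step(xb: torch.Tensor, yb: torch.Tensor) -> None:
    """One DP-SGD update on a batch of examples (xb, yb)."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                         # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total + 1e-12), max=1.0)  # clip
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(xb)          # noisy averaged gradient step

# Example usage on random data (assumption):
dp_sgd_step(torch.randn(8, 20), torch.randint(0, 2, (8,)))
```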
Encryption technology
- Gazelle: A Low Latency Framework for Secure Neural Network Inference, 2018
- Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference, 2018
- nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data, 2019
- Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network, 2021
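A minimal sketch of privacy-preserving inference with homomorphic encryption, using the additively homomorphic Paillier scheme via the python-paillier (phe) library; the frameworks cited above support full neural networks with more advanced schemes, so this linear-score example with made-up weights only conveys the basic workflow (client encrypts, server computes on ciphertexts, client decrypts).

```python
# Encrypted inference sketch with additive homomorphic encryption (Paillier).
# Feature values, weights, and key size are illustrative assumptions.
from phe import paillier

# Client side: generate keys and encrypt the private input features.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
features = [0.5, -1.2, 3.0]                        # private client data
enc_features = [public_key.encrypt(x) for x in features]

# Server side: plaintext model weights, computation on ciphertexts only.
weights, bias = [0.8, 0.1, -0.4], 0.2
enc_score = sum(w * e for w, e in zip(weights, enc_features)) + bias

# Client side: decrypt the linear score; the server never saw the features.
print(private_key.decrypt(enc_score))              # 0.5*0.8 - 1.2*0.1 + 3.0*(-0.4) + 0.2
```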