This page covers the negative impact "consumers accidentally inputting their personal information into generative AI or similar systems" in the external influence aspect of the AI Security Map: the security targets it affects, the attacks and factors that cause it, and the corresponding defensive methods and countermeasures.
Security target
- Consumer
Attack or cause
- Degradation of transparency
- Social engineering attack
Defensive method or countermeasure
- Anonymization technology
- Differential privacy
- Federated learning
- Machine unlearning
- Encryption technology
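To make the anonymization countermeasure concrete, the following is a minimal sketch of a k-anonymity check (all field names and records are hypothetical): a dataset is k-anonymous when every combination of quasi-identifier values is shared by at least k records.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized records: age bucketed, zip code truncated.
records = [
    {"age": "20-29", "zip": "123**", "disease": "flu"},
    {"age": "20-29", "zip": "123**", "disease": "cold"},
    {"age": "30-39", "zip": "456**", "disease": "flu"},
]
print(is_k_anonymous(records, ["age", "zip"], 2))  # False
```

The check fails for k = 2 because the group ("30-39", "456**") contains only one record; generalizing or suppressing that record would restore 2-anonymity.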
References
Social engineering attack
Anonymization technology
Differential privacy
- Deep Learning with Differential Privacy, 2016
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2017
- Learning Differentially Private Recurrent Language Models, 2018
- Efficient Deep Learning on Multi-Source Private Data, 2018
- Evaluating Differentially Private Machine Learning in Practice, 2019
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy, 2020
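The papers above apply differential privacy to model training (e.g. DP-SGD); the basic building block they rely on can be illustrated with the ε-DP Laplace mechanism for a counting query. This is a minimal sketch of the mechanism itself, not the training algorithms from these papers.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon):
    """epsilon-DP answer to a counting query.

    A counting query has sensitivity 1 (one person changes the count
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)  # illustrative; a real deployment would not fix the seed
noisy = private_count(100, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the noisy answer is unbiased, so averaging many independent releases would recover the true count, which is why privacy budgets are tracked across queries.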
Federated learning
- Practical Secure Aggregation for Federated Learning on User-Held Data, 2016
- Communication-Efficient Learning of Deep Networks from Decentralized Data, 2017
- Federated Learning: Strategies for Improving Communication Efficiency, 2018
- Federated Optimization in Heterogeneous Networks, 2020
- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning, 2020
- Federated Learning with Matched Averaging, 2020
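The federated learning papers above keep raw data on user devices and share only model updates with a server. A minimal FedAvg-style sketch on a toy one-parameter linear model (all data and hyperparameters are illustrative, not any specific paper's setup):

```python
def local_update(w, data, lr=0.01, epochs=5):
    """Client side: a few gradient-descent steps on local data only."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(w, client_datasets, rounds=20):
    """Server side: average client models, weighted by local data size."""
    for _ in range(rounds):
        updates = [(local_update(w, d), len(d)) for d in client_datasets]
        total = sum(n for _, n in updates)
        w = sum(wi * n for wi, n in updates) / total
    return w

# Clients hold disjoint samples from y = 3x; raw data never leaves a client.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)], [(4.0, 12.0), (5.0, 15.0)]]
w = fed_avg(0.0, clients)  # converges toward 3.0
```

The server only ever sees model parameters, which mitigates direct exposure of personal data; the secure aggregation paper above goes further and hides even individual clients' updates from the server.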
Machine unlearning
- Making AI Forget You: Data Deletion in Machine Learning, 2019
- Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks, 2020
- Certified Data Removal from Machine Learning Models, 2020
- Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations, 2020
- Approximate Data Deletion from Machine Learning Models, 2021
- Fast Yet Effective Machine Unlearning, 2021
- Machine Unlearning for Random Forests, 2021
- Machine Unlearning of Features and Labels, 2023
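Several of the unlearning papers above target exact or certified deletion of a user's data from a trained model. A sharded-retraining sketch in that spirit (the sub-model here is a toy mean, not any specific paper's method): train one sub-model per data shard, and on a deletion request retrain only the affected shard rather than the whole ensemble.

```python
class ShardedEnsemble:
    """Exact unlearning via sharding: deleting a point only requires
    retraining the one shard that contained it."""

    def __init__(self, shards):
        self.shards = [list(s) for s in shards]
        self.models = [self._fit(s) for s in self.shards]

    @staticmethod
    def _fit(shard):
        # Toy sub-model: the mean of the shard's values.
        return sum(shard) / len(shard) if shard else 0.0

    def predict(self):
        # Ensemble prediction: average the sub-models.
        return sum(self.models) / len(self.models)

    def unlearn(self, value):
        for i, shard in enumerate(self.shards):
            if value in shard:
                shard.remove(value)
                self.models[i] = self._fit(shard)  # retrain one shard only
                return

ens = ShardedEnsemble([[1.0, 3.0], [5.0]])
ens.unlearn(3.0)
# After unlearning, the ensemble is identical to one trained from
# scratch without the deleted point.
```

Because retraining is confined to one shard, the resulting model is exactly what training without the deleted point would have produced, which is the guarantee approximate-unlearning methods trade away for speed.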
Encryption technology
- Gazelle: A Low Latency Framework for Secure Neural Network Inference, 2018
- Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference, 2018
- nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data, 2019
- Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network, 2021
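The encryption papers above run neural-network inference on encrypted inputs, typically via homomorphic encryption, so the service never sees the consumer's plaintext data. A toy Paillier sketch of the additive homomorphism these schemes build on (tiny fixed primes, completely insecure, for illustration only):

```python
import math
import random

def paillier_keygen(p=293, q=433):
    """Toy Paillier keypair; p and q are tiny fixed primes (insecure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n.
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 100
```

Real private-inference systems combine homomorphic operations like this with garbled circuits or leveled FHE to evaluate whole network layers, as in the Gazelle and nGraph-HE2 papers above.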