This page shows, for the negative impact "consumers inadvertently entering their own personal information into generative AI and similar services" mapped to the external-influence aspect of the AI Security Map, the security targets, the attacks and factors that cause it, and the defense methods and countermeasures.
Security Targets
- Consumers
Attacks / Factors
- Loss of transparency
- Social engineering attacks
Defense Methods / Countermeasures
- Anonymization techniques (differential privacy)
- Federated learning
- Machine unlearning
- Encryption techniques
References
Social engineering attacks
Anonymization techniques
Differential privacy
- Deep Learning with Differential Privacy, 2016
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2017
- Learning Differentially Private Recurrent Language Models, 2018
- Efficient Deep Learning on Multi-Source Private Data, 2018
- Evaluating Differentially Private Machine Learning in Practice, 2019
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy, 2020
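The DP-SGD algorithm from "Deep Learning with Differential Privacy" (2016), listed above, privatizes training by clipping each per-example gradient to an L2 bound C and adding Gaussian noise scaled by a noise multiplier σ before averaging. A minimal pure-Python sketch of that privatization step (function names are illustrative, not from any library):

```python
import math
import random

def clip_l2(grad, clip_norm):
    """Scale grad down so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def privatize_batch(per_example_grads, clip_norm, noise_multiplier, rng=random):
    """One DP-SGD step (after Abadi et al., 2016): clip each example's
    gradient, sum them, add Gaussian noise with std sigma * C, then
    average over the batch."""
    clipped = [clip_l2(g, clip_norm) for g in per_example_grads]
    dim = len(clipped[0])
    total = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * clip_norm
    noisy = [t + rng.gauss(0.0, sigma) for t in total]
    return [v / len(per_example_grads) for v in noisy]
```

Clipping bounds any single person's influence on the update, which is what makes the added noise sufficient for a differential-privacy guarantee.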
Federated learning
- Practical Secure Aggregation for Federated Learning on User-Held Data, 2016
- Communication-Efficient Learning of Deep Networks from Decentralized Data, 2017
- Federated Learning: Strategies for Improving Communication Efficiency, 2018
- Federated Optimization in Heterogeneous Networks, 2020
- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning, 2020
- Federated Learning with Matched Averaging, 2020
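Federated Averaging, introduced in "Communication-Efficient Learning of Deep Networks from Decentralized Data" (2017) above, keeps raw data on each device: clients train locally and a server combines the resulting models by a weighted mean. A sketch, assuming each client model is a flat list of floats:

```python
def fedavg(client_weights, client_sizes):
    """Federated Averaging (McMahan et al., 2017): combine client models
    by a mean weighted by each client's local dataset size. Only model
    parameters, never raw data, reach the server."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

With equal client dataset sizes this reduces to a plain mean of the client models; the later papers in this list address the heterogeneous case where it does not converge as well.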
Machine unlearning
- Making AI Forget You: Data Deletion in Machine Learning, 2019
- Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks, 2020
- Certified Data Removal from Machine Learning Models, 2020
- Descent-to-Delete: Gradient-Based Methods for Machine Unlearning, 2020
- Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations, 2020
- Approximate Data Deletion from Machine Learning Models, 2021
- Fast Yet Effective Machine Unlearning, 2021
- Machine Unlearning for Random Forests, 2021
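Several of the works above achieve exact unlearning for models whose parameters are sums over training points, so a point's contribution can be subtracted in O(1) instead of retraining. A toy illustration of that idea (the `MeanModel` class is hypothetical, not from any of the cited papers):

```python
class MeanModel:
    """Toy model whose only parameter is the mean of the training data.
    Because the parameter is a running sum, a training point can be
    deleted exactly: the resulting model equals one retrained from
    scratch without that point, the goal of exact machine unlearning."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, x):
        self.total += x
        self.count += 1

    def delete(self, x):
        # Unlearn one previously added point by removing its contribution.
        self.total -= x
        self.count -= 1

    def predict(self):
        return self.total / self.count
```

Deep networks lack this summable structure, which is why the papers above resort to approximate forgetting or certified removal bounds.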
Encryption techniques
- Gazelle: A Low Latency Framework for Secure Neural Network Inference, 2018
- Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference, 2018
- nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data, 2019
- Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network, 2021
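The papers above build on homomorphic encryption, which lets a server compute on ciphertexts without ever seeing the plaintexts. A toy Paillier cryptosystem sketches the additive case: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The tiny hard-coded primes are for illustration only and provide no security:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# WARNING: insecure demo parameters; real deployments use ~2048-bit primes.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael lambda(n)
mu = pow(lam, -1, n)           # valid because the generator is g = n + 1

def encrypt(m, rng=random):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """Recover m via L(c^lam mod n^2) * mu mod n, with L(x) = (x-1)//n."""
    x = pow(c, lam, n2)
    return (x - 1) // n * mu % n

# Homomorphic addition: multiply ciphertexts, then decrypt the sum.
c = encrypt(12) * encrypt(30) % n2
```

Schemes like those in Gazelle and nGraph-HE2 extend this idea to the multiplications and nonlinearities needed for neural network inference on encrypted inputs.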