This section shows the attacks and factors that cause the negative impact "leakage of training data" in the information system aspect of the AI Security Map, the defense methods and countermeasures against them, and the AI technologies, tasks, and data they target. Related elements of the external influence aspect are also listed.
Attacks / Factors
- Membership inference
Defense methods / Countermeasures
- Differential privacy
- Encryption techniques
Target AI technologies
- DNN
- CNN
- GNN
- GAN
- Diffusion model
- Federated learning
- LLM
Tasks
- Classification
- Generation
Target data
- Images
- Graphs
- Text
- Audio
Related external influence aspects
References
Membership inference (an illustrative attack sketch follows this list)
- Membership Inference Attacks Against Machine Learning Models, 2017
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2017
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, 2018
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models, 2019
- Systematic Evaluation of Privacy Risks of Machine Learning Models, 2020
- Information Leakage in Embedding Models, 2020
- Membership Leakage in Label-Only Exposures, 2020
- Label-Only Membership Inference Attacks, 2020
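To make the attack concrete, the sketch below shows the simple confidence-thresholding form of membership inference examined in several of the papers above: a record is guessed to be a training-set member when the model's confidence in its true label exceeds a threshold, which works to the extent that the model is overfitted. The synthetic data, threshold value, and all function names are assumptions made for this illustration, not code from any cited paper.

```python
# Minimal, illustrative confidence-threshold membership inference test.
# Everything here (data, threshold, names) is a made-up example.
import numpy as np


def true_label_confidence(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Model confidence assigned to each example's true label."""
    return probs[np.arange(len(labels)), labels]


def membership_guess(probs: np.ndarray, labels: np.ndarray,
                     threshold: float = 0.8) -> np.ndarray:
    """Guess 'member' (True) when true-label confidence exceeds the threshold."""
    return true_label_confidence(probs, labels) > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic softmax outputs: training members get more peaked distributions,
    # mimicking the overfitting gap that the attack exploits.
    member_probs = rng.dirichlet(alpha=[20, 1, 1], size=1000)
    nonmember_probs = rng.dirichlet(alpha=[2, 1, 1], size=1000)
    labels = np.zeros(1000, dtype=int)  # assume true class 0 for all examples

    tpr = membership_guess(member_probs, labels).mean()
    fpr = membership_guess(nonmember_probs, labels).mean()
    print(f"attack TPR: {tpr:.2f}, FPR: {fpr:.2f}")
```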
Differential privacy (a DP-SGD-style sketch follows this list)
- Deep Learning with Differential Privacy, 2016
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, 2017
- Learning Differentially Private Recurrent Language Models, 2018
- Efficient Deep Learning on Multi-Source Private Data, 2018
- Evaluating Differentially Private Machine Learning in Practice, 2019
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy, 2020
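As a rough illustration of how these defenses modify training, the sketch below implements the per-example gradient clipping and Gaussian noise step at the core of DP-SGD, in the spirit of "Deep Learning with Differential Privacy" above. It deliberately omits the privacy accounting (moments accountant / RDP); the synthetic gradients, parameter values, and function names are assumptions for this example only.

```python
# Illustrative DP-SGD-style update: clip each per-example gradient,
# sum, add Gaussian noise, then average. No privacy accounting included.
import numpy as np


def dp_noisy_mean_gradient(per_example_grads: np.ndarray,
                           clip_norm: float = 1.0,
                           noise_multiplier: float = 1.1,
                           rng=None) -> np.ndarray:
    """Return a differentially-private-style averaged gradient for one batch."""
    rng = rng or np.random.default_rng()
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Add Gaussian noise scaled to the clipping bound, then average.
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = rng.normal(size=(32, 10))  # a batch of 32 per-example gradients
    print(dp_noisy_mean_gradient(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```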
Encryption techniques (a toy homomorphic-encryption sketch follows this list)
- Gazelle: A Low Latency Framework for Secure Neural Network Inference, 2018
- Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference, 2018
- nGraph-HE2: A High-Throughput Framework for Neural Network Inference on Encrypted Data, 2019
- Privacy-Preserving Machine Learning with Fully Homomorphic Encryption for Deep Neural Network, 2021
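The systems cited above build on lattice-based (fully) homomorphic encryption schemes such as BFV/CKKS together with substantial engineering. As a far simpler illustration of the underlying idea, the toy sketch below uses textbook Paillier encryption, which is only additively homomorphic, so a server can evaluate a linear layer on encrypted inputs without seeing them. The tiny primes, integer weights, and all names are assumptions for this example; it is not the scheme of any cited paper and is not secure.

```python
# Toy, insecure Paillier (additively homomorphic) sketch: a server computes
# sum_i w_i * x_i on encrypted x without learning x. Requires Python 3.9+.
import math
import random

# Key generation with tiny demo primes (real keys are >= 2048 bits).
p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid shortcut because g = n + 1


def encrypt(m: int) -> int:
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n


def add_enc(c1: int, c2: int) -> int:
    """Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2."""
    return (c1 * c2) % n2


def mul_plain(c: int, k: int) -> int:
    """Enc(m)^k mod n^2 decrypts to k * m for a plaintext integer k."""
    return pow(c, k, n2)


if __name__ == "__main__":
    x = [3, 5, 2]                      # client's private integer inputs
    w = [4, 1, 7]                      # server's plaintext weights
    enc_x = [encrypt(v) for v in x]    # client sends only ciphertexts

    acc = encrypt(0)                   # server-side encrypted dot product
    for c, k in zip(enc_x, w):
        acc = add_enc(acc, mul_plain(c, k))

    print(decrypt(acc), sum(a * b for a, b in zip(x, w)))  # both print 31
```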