This page shows the attacks and contributing factors that lead to the negative impact on the information-system aspect mapped in the AI Security Map, "AI causes misclassification, degrading the quality of functions and services," together with the corresponding defense methods and countermeasures and the target AI technologies, tasks, and data. Related elements of the external-effect aspect are also shown.
Attacks / Contributing Factors
- Adversarial examples
Defense Methods / Countermeasures
- Adversarial training
- Adversarial example detection
- Model robustness guarantees
Target AI Technologies
- DNN
- CNN
- LLM
- Contrastive learning
- FSL
- GNN
- Federated learning
- LSTM
- RNN
Tasks
- Classification
Target Data
- Images
- Graphs
- Text
- Audio
Related External-Effect Aspects
References
Adversarial examples
- Intriguing properties of neural networks, 2014
- Explaining and Harnessing Adversarial Examples, 2015
- The limitations of deep learning in adversarial settings, 2015
- Adversarial Examples in the Physical World, 2017
- Towards Evaluating the Robustness of Neural Networks, 2017
- Towards Deep Learning Models Resistant to Adversarial Attacks, 2018
- A Closer Look at Deep Learning Heuristics: Learning Rate Restarts, Warmup and Distillation, 2019
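
For concreteness, here is a minimal sketch of the Fast Gradient Sign Method from Explaining and Harnessing Adversarial Examples (2015), the canonical attack behind this category. The model interface, the [0, 1] input range, and the epsilon value are illustrative assumptions, not part of the mapped references.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015). Model, epsilon, and the
# [0, 1] input range are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Generate adversarial examples with the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one epsilon-sized step in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid input (assumes inputs normalized to [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```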
Adversarial training
- Intriguing properties of neural networks, 2014
- Explaining and Harnessing Adversarial Examples, 2015
- Learning with a Strong Adversary, 2015
- Adversarial Examples: Attacks and Defenses for Deep Learning, 2017
- Towards Deep Learning Models Resistant to Adversarial Attacks, 2018
- Adversarial Training for Free!, 2019
- Adversarial Robustness Against the Union of Multiple Perturbation Models, 2019
- Bag of Tricks for Adversarial Training, 2020
- Smooth Adversarial Training, 2020
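
The following is a condensed PGD-based adversarial training loop in the spirit of Towards Deep Learning Models Resistant to Adversarial Attacks (2018). The network, optimizer, data loader, and hyperparameters are placeholder assumptions; batch-norm and device handling are omitted for brevity.

```python
# Condensed PGD adversarial training sketch (Madry et al., 2018).
# Hyperparameters and the training setup are illustrative assumptions.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step projected gradient descent inside an L-inf ball."""
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project back into the epsilon-ball around x, then the valid range.
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """One epoch trained on adversarial examples only (the Madry formulation)."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```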
Adversarial example detection
- Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, 2017
- On the (Statistical) Detection of Adversarial Examples, 2017
- On Detecting Adversarial Perturbations, 2017
- MagNet: a Two-Pronged Defense against Adversarial Examples, 2017
- Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction, 2021
- Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain, 2021
- Adversarial Example Detection for DNN Models: A Review and Experimental Comparison, 2022
- Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them, 2022
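
A simple instance of this family is a prediction-inconsistency detector, loosely in the spirit of the adaptive noise-reduction approach above (2021): benign inputs tend to keep their label under mild smoothing, while adversarial perturbations are often destroyed by it. The mean-filter denoiser below and the disagreement criterion are simplifying assumptions for illustration.

```python
# Prediction-inconsistency detection sketch. The blur denoiser and the
# label-disagreement criterion are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def blur(x: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Channel-wise mean filter used as a crude noise-reduction step."""
    c = x.shape[1]
    kernel = torch.ones(c, 1, k, k, device=x.device) / (k * k)
    return F.conv2d(x, kernel, padding=k // 2, groups=c)

@torch.no_grad()
def is_adversarial(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Flag inputs whose predicted label changes after denoising."""
    model.eval()
    pred_raw = model(x).argmax(dim=1)
    pred_denoised = model(blur(x)).argmax(dim=1)
    return pred_raw != pred_denoised
```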
Model robustness guarantees
- Explaining and Harnessing Adversarial Examples, 2015
- Towards Deep Neural Network Architectures Robust to Adversarial Examples, 2015
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, 2016
- Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, 2017
- Towards Deep Learning Models Resistant to Adversarial Attacks, 2018
- Ensemble Adversarial Training: Attacks and Defenses, 2018
- Provable defenses against adversarial examples via the convex outer adversarial polytope, 2018
- On Evaluating Adversarial Robustness, 2019
- Evaluating Robustness of Neural Networks with Mixed Integer Programming, 2019
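
To illustrate what a robustness certificate computes, here is a tiny interval bound propagation sketch for a ReLU network. Interval arithmetic is a simpler (looser) relaxation than the methods listed above (SMT solving, the convex outer polytope, mixed-integer programming), but it is sound in the same sense: if it certifies an input, no perturbation within the epsilon-ball can change the label. The two-layer setup and epsilon are assumptions.

```python
# Interval bound propagation sketch: a loose but sound robustness certificate.
# This is a simplified relaxation, not the exact methods cited above.
import torch
import torch.nn as nn

def interval_bounds(layers, x, epsilon):
    """Propagate the L-inf ball [x - eps, x + eps] through Linear/ReLU layers."""
    lo, hi = x - epsilon, x + epsilon
    for layer in layers:
        if isinstance(layer, nn.Linear):
            w, b = layer.weight, layer.bias
            w_pos, w_neg = w.clamp(min=0), w.clamp(max=0)
            # Lower bound uses lo on positive weights, hi on negative (and vice versa).
            lo, hi = (lo @ w_pos.T + hi @ w_neg.T + b,
                      hi @ w_pos.T + lo @ w_neg.T + b)
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

def certified(layers, x, y, epsilon):
    """Certify that class y outscores all others over the whole epsilon-ball."""
    lo, hi = interval_bounds(layers, x, epsilon)
    other_hi = hi.clone()
    other_hi[:, y] = float("-inf")
    # If y's lower bound beats every other logit's upper bound, the label is fixed.
    return bool((lo[:, y] > other_hi.max(dim=1).values).all())
```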