Model Robustness Guarantees

Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation

Authors: Andrew Norton, Yanjun Qi | Published: 2017-06-06 | Updated: 2017-06-16
Model Robustness Guarantees
Attack Types
Adversarial Training
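
Adversarial-Playground visualizes how adversarial samples are generated. As an illustration of the kind of attack such a suite typically demonstrates, here is a minimal sketch of the fast gradient sign method (FGSM), assuming a PyTorch classifier; the model and parameter choices are placeholders, not the tool's own code.

```python
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, label: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """One-step FGSM: perturb x in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move each pixel by +/- eps along the sign of the input gradient.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```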

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation

Authors: Matthias Hein, Maksym Andriushchenko | Published: 2017-05-23 | Updated: 2017-11-05
Model Robustness Guarantees
Relationship between Robustness and Privacy
Adversarial Training
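
For context, the guarantee in this line of work is a lower bound on the size of the perturbation needed to change a classifier's prediction. A hedged reconstruction of the Cross-Lipschitz-style bound (notation mine, and only a sketch of the form such a theorem takes): if f_c(x) is the score of the predicted class c, then for any R > 0 the prediction at x is unchanged for all perturbations delta with

```latex
\|\delta\|_p \le \min\Big\{
  \min_{j \ne c}
  \frac{f_c(x) - f_j(x)}
       {\max_{y \in B_p(x,R)} \|\nabla f_j(y) - \nabla f_c(y)\|_q},
  \; R \Big\},
\qquad \tfrac{1}{p} + \tfrac{1}{q} = 1 .
```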

Black-Box Attacks against RNN based Malware Detection Algorithms

Authors: Weiwei Hu, Ying Tan | Published: 2017-05-23
Model Robustness Guarantees
Attack Types
Adversarial Training

Ensemble Adversarial Training: Attacks and Defenses

Authors: Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel | Published: 2017-05-19 | Updated: 2020-04-26
Model Robustness Guarantees
Model Extraction Attacks
Deep Learning
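
The core idea of ensemble adversarial training is to augment training with adversarial examples crafted against other, pre-trained models, decoupling the attack from the model being trained. A minimal sketch, assuming placeholder PyTorch models (`target_model`, `static_models`); this illustrates the idea, not the authors' implementation.

```python
import random
import torch
import torch.nn.functional as F

def ensemble_adv_training_step(target_model, static_models, optimizer,
                               x, y, eps=0.03):
    """One training step: half clean loss, half loss on adversarial
    examples crafted on a randomly chosen source model."""
    source = random.choice(static_models + [target_model])
    x_ref = x.clone().detach().requires_grad_(True)
    F.cross_entropy(source(x_ref), y).backward()
    x_adv = (x_ref + eps * x_ref.grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(target_model(x), y)
                  + F.cross_entropy(target_model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```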

Delving into adversarial attacks on deep policies

Authors: Jernej Kos, Dawn Song | Published: 2017-05-18
Model Robustness Guarantees
Robustness
Defense Methods

Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression

Authors: Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, Duen Horng Chau | Published: 2017-05-08
Model Robustness
Model Robustness Guarantees
Defense Mechanisms
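
The defense named in the title is simple enough to sketch: re-encode the input through lossy JPEG compression before classification, which tends to remove the high-frequency noise that adversarial perturbations rely on. A minimal sketch with Pillow (the quality setting is an arbitrary assumption, not the paper's exact pipeline):

```python
import io
from PIL import Image

def jpeg_preprocess(image: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip the image through JPEG compression to squash small
    adversarial perturbations before it reaches the model."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()
```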

Universal Adversarial Perturbations Against Semantic Image Segmentation

Authors: Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, Volker Fischer | Published: 2017-04-19 | Updated: 2017-07-31
Semantic Segmentation Attacks
Model Robustness Guarantees
Adversarial Sample Detection

Enhancing Robustness of Machine Learning Systems via Data Transformations

Authors: Arjun Nitin Bhagoji, Daniel Cullina, Chawin Sitawarin, Prateek Mittal | Published: 2017-04-09 | Updated: 2017-11-29
Model Robustness Guarantees
Model Extraction Attacks
Defense Effectiveness Analysis
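
One data transformation studied as a defense in this vein is linear dimensionality reduction: project inputs onto their top principal components before classification, discarding the directions that small perturbations exploit. A minimal sketch with scikit-learn, assuming PCA as the transformation and an arbitrary component count:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_defense(train_x: np.ndarray, n_components: int = 50) -> PCA:
    """Fit PCA on clean training data; inputs are later projected onto
    the top components before being fed to the classifier."""
    return PCA(n_components=n_components).fit(train_x)

def transform_input(pca: PCA, x: np.ndarray) -> np.ndarray:
    # Project to the reduced subspace and reconstruct in input space.
    return pca.inverse_transform(pca.transform(x))
```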

Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks

Authors: Yi Han, Benjamin I. P. Rubinstein | Published: 2017-04-06 | Updated: 2017-05-25
Poisoning
Model Robustness Guarantees
Adversarial Training
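
For reference, a gradient-descent evasion attack of the kind examined above iteratively follows the loss gradient to push a sample across the decision boundary. A minimal sketch in PyTorch (step size and iteration count are placeholder choices):

```python
import torch
import torch.nn.functional as F

def gradient_descent_evasion(model, x, label, step=0.01, iters=50):
    """Iteratively perturb x to maximize the classification loss,
    i.e. to evade the classifier's correct prediction."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad).clamp(0, 1).detach()
    return x_adv
```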

Comment on “Biologically inspired protection of deep networks from adversarial attacks”

Authors: Wieland Brendel, Matthias Bethge | Published: 2017-04-05
Trigger Detection
Model Robustness Guarantees
Adversarial Training