Adversarial Learning

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features

Authors: Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik | Published: 2017-08-28 | Updated: 2019-05-10
Model Extraction Attack
Robustness Analysis
Adversarial Learning

Cascade Adversarial Machine Learning Regularized with a Unified Embedding

Authors: Taesik Na, Jong Hwan Ko, Saibal Mukhopadhyay | Published: 2017-08-08 | Updated: 2018-03-17
Robustness Analysis
Attack Method
Adversarial Learning

Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation

Authors: Andrew Norton, Yanjun Qi | Published: 2017-06-06 | Updated: 2017-06-16
Model Robustness Guarantees
Attack Type
Adversarial Learning

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation

Authors: Matthias Hein, Maksym Andriushchenko | Published: 2017-05-23 | Updated: 2017-11-05
Model Robustness Guarantees
Relationship between Robustness and Privacy
Adversarial Learning

Black-Box Attacks against RNN based Malware Detection Algorithms

Authors: Weiwei Hu, Ying Tan | Published: 2017-05-23
Model Robustness Guarantees
Attack Type
Adversarial Learning