Adversarial Training

TextDecepter: Hard Label Black Box Attack on Text Classifiers

Authors: Sachin Saxena | Published: 2020-08-16 | Updated: 2020-12-28
Applications of Text Classification
Adversarial Examples
Adversarial Training

Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise

Authors: Alex Serban, Erik Poll, Joost Visser | Published: 2020-08-12
Adversarial Examples
Adversarial Training
Optimization Problems

Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs

Authors: Rana Abou Khamis, Ashraf Matrawy | Published: 2020-07-08
Poisoning
Factors in Performance Degradation
Adversarial Training

On the transferability of adversarial examples between convex and 01 loss models

Authors: Yunzhe Xue, Meiyan Xie, Usman Roshan | Published: 2020-06-14 | Updated: 2020-07-29
Algorithm Design
Adversarial Examples
Adversarial Training

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data

Authors: Lu Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Yuan Jiang | Published: 2020-05-11 | Updated: 2020-11-10
Algorithms
Attack Detection
Adversarial Training

Towards Robustness against Unsuspicious Adversarial Examples

Authors: Liang Tong, Minzhe Guo, Atul Prakash, Yevgeniy Vorobeychik | Published: 2020-05-08 | Updated: 2020-10-08
Robustness Improvement Methods
Adversarial Examples
Adversarial Training

Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy

Authors: Aditya Saligrama, Guillaume Leclerc | Published: 2020-02-26
Robustness Evaluation
Performance Evaluation
Adversarial Training

Gödel’s Sentence Is An Adversarial Example But Unsolvable

Authors: Xiaodong Qi, Lansheng Han | Published: 2020-02-25
Adversarial Examples
Adversarial Training
Vulnerability Prediction

HYDRA: Pruning Adversarially Robust Neural Networks

Authors: Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana | Published: 2020-02-24 | Updated: 2020-11-10
Robustness Evaluation
Adversarial Training
Optimization Problems

Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks

Authors: Kirthi Shankar Sivamani, Rajeev Sahay, Aly El Gamal | Published: 2020-02-22
Performance Evaluation
Adversarial Training
Defense Methods