Adversarial Training

Adversarial Training is a Form of Data-dependent Operator Norm Regularization

Authors: Kevin Roth, Yannic Kilcher, Thomas Hofmann | Published: 2019-06-04 | Updated: 2020-10-23
Adversarial Training
Deep Learning Technology
Defense Mechanism

Simple Black-box Adversarial Attacks

Authors: Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, Kilian Q. Weinberger | Published: 2019-05-17 | Updated: 2019-08-15
Query Generation Methods
Performance Evaluation Methods
Adversarial Training

On Norm-Agnostic Robustness of Adversarial Training

Authors: Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin | Published: 2019-05-15
Poisoning
Adversarial Examples
Adversarial Training

Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning

Authors: Devinder Kumar, Ibrahim Ben-Daya, Kanav Vats, Jeffery Feng, Graham Taylor, Alexander Wong | Published: 2019-04-21
Attack Evaluation
Adversarial Training
Machine Learning Technology

Adversarial Out-domain Examples for Generative Models

Authors: Dario Pasquini, Marco Mingione, Massimo Bernaschi | Published: 2019-03-07 | Updated: 2019-05-13
Out-of-Distribution Detection
Adversarial Learning
Adversarial Training

GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier

Authors: Guanxiong Liu, Issa Khalil, Abdallah Khreishah | Published: 2019-03-06
Model Robustness Guarantees
Adversarial Learning
Adversarial Training

Excessive Invariance Causes Adversarial Vulnerability

Authors: Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge | Published: 2018-11-01 | Updated: 2020-07-12
Model Inversion
Adversarial Examples
Adversarial Training

Logit Pairing Methods Can Fool Gradient-Based Attacks

Authors: Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, Dietrich Klakow | Published: 2018-10-29 | Updated: 2019-03-12
Robustness Requirements
Adversarial Learning
Adversarial Training

Rademacher Complexity for Adversarially Robust Generalization

Authors: Dong Yin, Kannan Ramchandran, Peter Bartlett | Published: 2018-10-29 | Updated: 2020-07-29
Model Robustness Guarantees
Robustness Requirements
Adversarial Training

Detection based Defense against Adversarial Examples from the Steganalysis Point of View

Authors: Jiayang Liu, Weiming Zhang, Yiwei Zhang, Dongdong Hou, Yujia Liu, Hongyue Zha, Nenghai Yu | Published: 2018-06-21 | Updated: 2018-12-24
Cybersecurity
Adversarial Example Detection
Adversarial Training