Model Robustness Guarantees

A3T: Adversarially Augmented Adversarial Training

Authors: Akram Erraqabi, Aristide Baratin, Yoshua Bengio, Simon Lacoste-Julien | Published: 2018-01-12
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Attack Detection

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks

Authors: Yongshuai Liu, Jiyu Chen, Hao Chen | Published: 2018-01-09 | Updated: 2018-12-08
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression

Authors: Martin Gubri | Published: 2018-01-06
Model Robustness Guarantees
Adversarial Attack Methods
Machine Learning Algorithms

The Robust Manifold Defense: Adversarial Training using Generative Models

Authors: Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis | Published: 2017-12-26 | Updated: 2019-07-10
Model Robustness Guarantees
Adversarial Example Detection
Adversarial Training

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

Authors: Wieland Brendel, Jonas Rauber, Matthias Bethge | Published: 2017-12-12 | Updated: 2018-02-16
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Methods

CycleGAN, a Master of Steganography

Authors: Casey Chu, Andrey Zhmoginov, Mark Sandler | Published: 2017-12-08 | Updated: 2017-12-16
Model Robustness Guarantees
Information Hiding Techniques
Generative Adversarial Networks

Generative Adversarial Perturbations

Authors: Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie | Published: 2017-12-06 | Updated: 2018-07-06
Model Robustness Guarantees
Adversarial Attack Methods
Generative Adversarial Networks

Where Classification Fails, Interpretation Rises

Authors: Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang | Published: 2017-12-02
FDI Attack Detection Methods
Model Robustness Guarantees
Adversarial Training

Evaluating Robustness of Neural Networks with Mixed Integer Programming

Authors: Vincent Tjeng, Kai Xiao, Russ Tedrake | Published: 2017-11-20 | Updated: 2019-02-18
Model Robustness Guarantees
Robustness
Deep Learning Techniques

The best defense is a good offense: Countering black box attacks by predicting slightly wrong labels

Authors: Yannic Kilcher, Thomas Hofmann | Published: 2017-11-15
Backdoor Model Detection
Proactive Defense
Model Robustness Guarantees