Adversarial Training

Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples

Authors: Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan | Published: 2018-02-05 | Updated: 2018-02-07
Data Preprocessing
Model Robustness Guarantees
Adversarial Training

Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

Authors: Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani | Published: 2018-01-15 | Updated: 2018-06-19
Sparsity-Based Defense
Adversarial Training
Adversarial Attack

Spatially Transformed Adversarial Examples

Authors: Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2018-01-09
Robustness Improvement Methods
Adversarial Training
Adversarial Attack Detection

Generating Adversarial Examples with Adversarial Networks

Authors: Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2019-02-14
Adversarial Examples
Adversarial Training
Adversarial Attack Detection

The Robust Manifold Defense: Adversarial Training using Generative Models

Authors: Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis | Published: 2017-12-26 | Updated: 2019-07-10
Model Robustness Guarantees
Adversarial Example Detection
Adversarial Training

Query-Efficient Black-box Adversarial Examples (superceded)

Authors: Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin | Published: 2017-12-19 | Updated: 2018-04-06
Poisoning
Adversarial Training
Adversarial Attack Methods

Adversarial Examples: Attacks and Defenses for Deep Learning

Authors: Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li | Published: 2017-12-19 | Updated: 2018-07-07
Adversarial Spectral Attack Detection
Adversarial Training
Deep Learning

DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation

Authors: Catherine Wong | Published: 2017-12-14
Spam Detection
Backdoor Model Detection
Adversarial Training

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

Authors: Wieland Brendel, Jonas Rauber, Matthias Bethge | Published: 2017-12-12 | Updated: 2018-02-16
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Methods

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

Authors: Battista Biggio, Fabio Roli | Published: 2017-12-08 | Updated: 2018-07-19
Poisoning
Adversarial Training
Adversarial Attack Methods