Adversarial Training

Are Generative Classifiers More Robust to Adversarial Attacks?

Authors: Yingzhen Li, John Bradshaw, Yash Sharma | Published: 2018-02-19 | Updated: 2019-05-27
Robustness Evaluation
Adversarial Training
Adversarial Attack

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks

Authors: Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli | Published: 2018-02-15 | Updated: 2018-06-12
Adversarial Learning
Adversarial Training
Adversarial Attack

Distributed One-class Learning

Authors: Ali Shahin Shamsabadi, Hamed Haddadi, Andrea Cavallaro | Published: 2018-02-10
Privacy Protection Mechanism
Adversarial Training
Machine Learning Method

Certified Robustness to Adversarial Examples with Differential Privacy

Authors: Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana | Published: 2018-02-09 | Updated: 2019-05-29
Robustness Evaluation
Adversarial Examples
Adversarial Training

Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples

Authors: Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan | Published: 2018-02-05 | Updated: 2018-02-07
Data Preprocessing
Model Robustness Guarantee
Adversarial Training

Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

Authors: Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani | Published: 2018-01-15 | Updated: 2018-06-19
Sparsity-based Defense
Adversarial Training
Adversarial Attack

Spatially Transformed Adversarial Examples

Authors: Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2018-01-09
Robustness Improvement Method
Adversarial Training
Adversarial Attack Detection

Generating Adversarial Examples with Adversarial Networks

Authors: Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2019-02-14
Adversarial Examples
Adversarial Training
Adversarial Attack Detection

The Robust Manifold Defense: Adversarial Training using Generative Models

Authors: Ajil Jalal, Andrew Ilyas, Constantinos Daskalakis, Alexandros G. Dimakis | Published: 2017-12-26 | Updated: 2019-07-10
Model Robustness Guarantee
Adversarial Example Detection
Adversarial Training

Query-Efficient Black-box Adversarial Examples (superceded)

Authors: Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin | Published: 2017-12-19 | Updated: 2018-04-06
Poisoning
Adversarial Training
Adversarial Attack Method