Adversarial Examples

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Authors: Bo Luo, Yannan Liu, Lingxiao Wei, Qiang Xu | Published: 2018-01-15
Robustness Improvement Methods
Adversarial Examples
Adversarial Attack Detection

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks

Authors: Yongshuai Liu, Jiyu Chen, Hao Chen | Published: 2018-01-09 | Updated: 2018-12-08
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

Generating Adversarial Examples with Adversarial Networks

Authors: Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2019-02-14
Adversarial Examples
Adversarial Training
Adversarial Attack Detection

Building Robust Deep Neural Networks for Road Sign Detection

Authors: Arkar Min Aung, Yousef Fadila, Radian Gondokaryono, Luis Gonzalez | Published: 2017-12-26
Robustness Improvement Methods
Adversarial Examples
Adversarial Attack Methods

When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time

Authors: David J. Miller, Yujia Wang, George Kesidis | Published: 2017-12-18 | Updated: 2018-06-28
Trigger Detection
Adversarial Examples
Adversarial Attack Methods

Improving Network Robustness against Adversarial Attacks with Compact Convolution

Authors: Rajeev Ranjan, Swami Sankaranarayanan, Carlos D. Castillo, Rama Chellappa | Published: 2017-12-03 | Updated: 2018-03-22
Robustness Improvement Methods
Adversarial Examples
Adversarial Training

Adversarial Phenomenon in the Eyes of Bayesian Deep Learning

Authors: Ambrish Rawat, Martin Wistuba, Maria-Irina Nicolae | Published: 2017-11-22
Bayesian Deep Learning
Adversarial Examples
Adversarial Attack Methods

Enhanced Attacks on Defensively Distilled Deep Neural Networks

Authors: Yujia Liu, Weiming Zhang, Shaohua Li, Nenghai Yu | Published: 2017-11-16
Robustness Improvement
Adversarial Examples
Adversarial Attack Analysis

Intriguing Properties of Adversarial Examples

Authors: Ekin D. Cubuk, Barret Zoph, Samuel S. Schoenholz, Quoc V. Le | Published: 2017-11-08
Adversarial Examples
Adversarial Training
Adversarial Attacks

Adversarial Frontier Stitching for Remote Neural Network Watermarking

Authors: Erwan Le Merrer, Patrick Perez, Gilles Trédan | Published: 2017-11-06 | Updated: 2019-08-07
Adversarial Examples
Adversarial Training
Watermark Design