Adversarial Attack Detection

Knowledge Distillation with Adversarial Samples Supporting Decision Boundary

Authors: Byeongho Heo, Minsik Lee, Sangdoo Yun, Jin Young Choi | Published: 2018-05-15 | Updated: 2018-12-14
Adversarial Examples
Adversarial Attack Detection
Knowledge Distillation

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing

Authors: Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang | Published: 2018-05-14 | Updated: 2018-05-17
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

Authors: Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli | Published: 2018-03-12
Malware Detection Methods
Adversarial Attack Detection
Encryption Technology

Combating Adversarial Attacks Using Sparse Representations

Authors: Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani | Published: 2018-03-11 | Updated: 2018-07-13
Sparse Representation
Backdoor Model Detection
Adversarial Attack Detection

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Authors: Bo Luo, Yannan Liu, Lingxiao Wei, Qiang Xu | Published: 2018-01-15
Robustness Improvement Methods
Adversarial Examples
Adversarial Attack Detection

A3T: Adversarially Augmented Adversarial Training

Authors: Akram Erraqabi, Aristide Baratin, Yoshua Bengio, Simon Lacoste-Julien | Published: 2018-01-12
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Attack Detection

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks

Authors: Yongshuai Liu, Jiyu Chen, Hao Chen | Published: 2018-01-09 | Updated: 2018-12-08
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

Spatially Transformed Adversarial Examples

Authors: Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2018-01-09
Robustness Improvement Methods
Adversarial Training
Adversarial Attack Detection

Generating Adversarial Examples with Adversarial Networks

Authors: Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, Dawn Song | Published: 2018-01-08 | Updated: 2019-02-14
Adversarial Examples
Adversarial Training
Adversarial Attack Detection