Robustness Improvement

MixTrain: Scalable Training of Verifiably Robust Neural Networks

Authors: Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana | Published: 2018-11-06 | Updated: 2018-12-01
Model Performance Evaluation
Robustness Improvement
Adversarial Training

SSCNets: Robustifying DNNs using Secure Selective Convolutional Filters

Authors: Hassan Ali, Faiq Khalid, Hammad Tariq, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Muhammad Shafique | Published: 2018-11-04 | Updated: 2020-05-15
Trigger Detection
Robustness Improvement
Attack Evaluation

Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

Authors: Siyue Wang, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, Xue Lin | Published: 2018-09-13
Model Robustness Guarantee
Robustness Improvement
Adversarial Examples

Enhanced Attacks on Defensively Distilled Deep Neural Networks

Authors: Yujia Liu, Weiming Zhang, Shaohua Li, Nenghai Yu | Published: 2017-11-16
Robustness Improvement
Adversarial Examples
Adversarial Attack Analysis

Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples

Authors: Jihun Hamm, Akshay Mehra | Published: 2017-11-12 | Updated: 2018-06-27
Robustness Improvement
Adversarial Training
Adversarial Attack Analysis

Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks

Authors: Thilo Strauss, Markus Hanselmann, Andrej Junginger, Holger Ulmer | Published: 2017-09-11 | Updated: 2018-02-08
Model Robustness Guarantee
Model Performance Evaluation
Robustness Improvement

Towards Proving the Adversarial Robustness of Deep Neural Networks

Authors: Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | Published: 2017-09-08
Model Robustness Guarantee
Robustness Improvement
Adversarial Training