Model Robustness Guarantees

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing

Authors: Jingyi Wang, Jun Sun, Peixin Zhang, Xinyu Wang | Published: 2018-05-14 | Updated: 2018-05-17
Model Robustness Guarantees
Adversarial Samples
Adversarial Attack Detection

How Robust are Deep Neural Networks?

Authors: Biswa Sengupta, Karl J. Friston | Published: 2018-04-30
Model Robustness Guarantees
Deep Learning-based IDS
Watermarking Techniques

Query-Efficient Black-Box Attack Against Sequence-Based Malware Classifiers

Authors: Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach | Published: 2018-04-23 | Updated: 2020-10-03
Query Generation Methods
Model Robustness Guarantees
Adversarial Attack Methods

ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector

Authors: Shang-Tse Chen, Cory Cornelius, Jason Martin, Duen Horng Chau | Published: 2018-04-16 | Updated: 2019-05-01
Faster R-CNN
Model Robustness Guarantees
Adversarial Attack Methods

On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

Authors: Anish Athalye, Nicholas Carlini | Published: 2018-04-10
Model Robustness Guarantees
Adversarial Attacks
Watermarking

Adversarial Training Versus Weight Decay

Authors: Angus Galloway, Thomas Tanay, Graham W. Taylor | Published: 2018-04-10 | Updated: 2018-07-23
Model Robustness Guarantees
Adversarial Training
Adversarial Attacks

Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations

Authors: Alex Lamb, Jonathan Binas, Anirudh Goyal, Dmitriy Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio | Published: 2018-04-07
Model Robustness Guarantees
Adversarial Attacks
Robustness of Deep Networks

Adversarial Attacks and Defences Competition

Authors: Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe | Published: 2018-03-31
Model Robustness Guarantees
Adversarial Attacks
Robustness of Deep Networks

Defending against Adversarial Images using Basis Functions Transformations

Authors: Uri Shaham, James Garritano, Yutaro Yamada, Ethan Weinberger, Alex Cloninger, Xiuyuan Cheng, Kelly Stanton, Yuval Kluger | Published: 2018-03-28 | Updated: 2018-04-16
Watermarking
Model Robustness Guarantees
Adversarial Attacks

Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization

Authors: Daniel Jakubovitz, Raja Giryes | Published: 2018-03-23 | Updated: 2019-05-28
Model Robustness Guarantees
Adversarial Training
Regularization