Model Robustness Guarantees

LatentPoison – Adversarial Attacks On The Latent Space

Authors: Antonia Creswell, Anil A. Bharath, Biswa Sengupta | Published: 2017-11-08
Poisoning
Model Robustness Guarantees
Adversarial Attack

Provable defenses against adversarial examples via the convex outer adversarial polytope

Authors: Eric Wong, J. Zico Kolter | Published: 2017-11-02 | Updated: 2018-06-08
Model Robustness Guarantees
Robustness
Deep Learning Technology

Attacking Binarized Neural Networks

Authors: Angus Galloway, Graham W. Taylor, Medhat Moussa | Published: 2017-11-01 | Updated: 2018-01-31
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Examples

Attacking the Madry Defense Model with $L_1$-based Adversarial Examples

Authors: Yash Sharma, Pin-Yu Chen | Published: 2017-10-30 | Updated: 2018-07-27
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Example Detection

Boosting Adversarial Attacks with Momentum

Authors: Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li | Published: 2017-10-17 | Updated: 2018-03-22
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Example Detection
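
The title above refers to the momentum iterative attack (MI-FGSM), which accumulates gradients across iterations to stabilize update directions. Below is a minimal PyTorch sketch of that idea; the model, epsilon, step count, and decay factor are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a momentum iterative attack (MI-FGSM style).
# Assumes `model` maps image batches of shape (N, C, H, W) in [0, 1] to logits.
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Craft L_inf-bounded adversarial examples with gradient momentum."""
    alpha = eps / steps            # per-step size so the total budget is eps
    g = torch.zeros_like(x)        # accumulated (momentum) gradient
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Normalize by the L1 norm, then fold into the momentum term.
        grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad

        # Sign step, then project back into the eps-ball and the valid range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```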

Bayesian Hypernetworks

Authors: David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, Aaron Courville | Published: 2017-10-13 | Updated: 2018-04-24
Model Robustness Guarantees
Model Design
Labels

Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

Authors: Xiaoyu Cao, Neil Zhenqiang Gong | Published: 2017-09-17 | Updated: 2019-12-31
Model Robustness Guarantees
Adversarial Training
Adversarial Example Detection
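
Region-based classification labels an input by the region around it rather than the single point: sample inputs uniformly from a small hypercube centered at the example, classify each sample, and take a majority vote. A minimal sketch follows; the radius and sample count are illustrative assumptions.

```python
# Minimal sketch of region-based (hypercube majority-vote) classification.
# Assumes `model` maps inputs in [0, 1] to logits.
import torch

@torch.no_grad()
def region_based_predict(model, x, radius=0.02, n_samples=100):
    """Majority-vote prediction over a small hypercube around each input."""
    votes = []
    for _ in range(n_samples):
        noise = torch.empty_like(x).uniform_(-radius, radius)
        x_noisy = (x + noise).clamp(0.0, 1.0)
        votes.append(model(x_noisy).argmax(dim=1))
    votes = torch.stack(votes)        # (n_samples, batch)
    return votes.mode(dim=0).values   # most frequent label per input
```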

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-09-13 | Updated: 2018-02-10
Model Robustness Guarantees
Adversarial Training
Adversarial Examples
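
EAD formulates attack generation as an elastic-net-regularized optimization: minimize c·f(x') + β·||x' − x||₁ + ||x' − x||₂² over the valid input box, where f is a C&W-style hinge loss. The sketch below uses plain gradient steps plus an ISTA-style soft-threshold shrink for the L1 term; the constants c, beta, lr, and steps are illustrative assumptions, and the paper itself uses a more careful FISTA schedule with a binary search over c.

```python
# Minimal sketch of an elastic-net-regularized (EAD-style) untargeted attack.
import torch
import torch.nn.functional as F

def ead_attack(model, x, y, c=1.0, beta=1e-2, lr=1e-2, steps=200, kappa=0.0):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)

        # Untargeted C&W-style hinge: push the true class below the runner-up.
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        other_logit = logits.masked_fill(
            F.one_hot(y, logits.size(1)).bool(), float("-inf")).max(dim=1).values
        hinge = torch.clamp(true_logit - other_logit + kappa, min=0.0)

        # Smooth part of the objective: c * hinge + squared L2 distortion.
        loss = (c * hinge + ((x_adv - x) ** 2).flatten(1).sum(dim=1)).sum()
        grad, = torch.autograd.grad(loss, x_adv)

        # Gradient step, then soft-threshold toward x to handle the L1 term.
        z = x_adv.detach() - lr * grad
        delta = z - x
        delta = torch.sign(delta) * torch.clamp(delta.abs() - lr * beta, min=0.0)
        x_adv = (x + delta).clamp(0.0, 1.0)

    return x_adv.detach()
```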

Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks

Authors: Thilo Strauss, Markus Hanselmann, Andrej Junginger, Holger Ulmer | Published: 2017-09-11 | Updated: 2018-02-08
Model Robustness Guarantees
Model Performance Evaluation
Robustness Improvement
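
The ensemble defense studied here predicts from several independently trained models rather than one. A minimal sketch of the probability-averaging variant is shown below; the list of models is an assumed input, and the paper also evaluates other ensemble constructions such as bagging and mixed architectures.

```python
# Minimal sketch of an ensemble defense: average softmax outputs and
# predict from the averaged distribution.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, x):
    """Average class probabilities over an ensemble and return hard labels."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)
```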

Towards Proving the Adversarial Robustness of Deep Neural Networks

Authors: Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | Published: 2017-09-08
Model Robustness Guarantees
Robustness Improvement
Adversarial Training