Robustness Evaluation

Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors

Authors: Ke Sun, Zhanxing Zhu, Zhouchen Lin | Published: 2019-02-28
Robustness Evaluation
Adversarial Example Detection
Adversarial Training

Function Space Particle Optimization for Bayesian Neural Networks

Authors: Ziyu Wang, Tongzheng Ren, Jun Zhu, Bo Zhang | Published: 2019-02-26 | Updated: 2019-05-08
Robustness Evaluation
Convergence Properties
Selection and Evaluation of Optimization Algorithms

Adversarial attacks hidden in plain sight

Authors: Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer | Published: 2019-02-25 | Updated: 2020-04-26
Adversarial Attack
Robustness Evaluation
Adversarial Training

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Authors: Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang | Published: 2019-02-23 | Updated: 2020-01-10
Model Robustness Guarantees
Robustness Evaluation
Adversarial Training

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples

Prior Networks for Detection of Adversarial Attacks

Authors: Andrey Malinin, Mark Gales | Published: 2018-12-06
Adversarial Attack Detection
Robustness Evaluation
Adversarial Training

Are Generative Classifiers More Robust to Adversarial Attacks?

Authors: Yingzhen Li, John Bradshaw, Yash Sharma | Published: 2018-02-19 | Updated: 2019-05-27
Robustness Evaluation
Adversarial Training
Adversarial Attack

Certified Robustness to Adversarial Examples with Differential Privacy

Authors: Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana | Published: 2018-02-09 | Updated: 2019-05-29
Robustness Evaluation
Adversarial Examples
Adversarial Training

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

Authors: Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel | Published: 2018-01-31
Model Robustness Guarantees
Robustness Evaluation
Adversarial Attack