Model Robustness Guarantees

Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification

Authors: Xiaoyu Cao, Neil Zhenqiang Gong | Published: 2017-09-17 | Updated: 2019-12-31
Model Robustness Guarantees
Adversarial Training
Adversarial Example Detection
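The region-based classification named in this title can be sketched as: instead of classifying a single test point, sample points from a hypercube centered on it and take a majority vote over their predicted labels, so a small adversarial nudge across the decision boundary need not flip the region's dominant label. A minimal pure-Python sketch with a hypothetical toy 1-D threshold classifier (the classifier, radius, and sample count are illustrative, not the paper's setup):

```python
import random
from collections import Counter

def toy_classifier(x):
    # Toy 1-D stand-in for a neural network: label 1 if x >= 0, else 0.
    return 1 if x >= 0 else 0

def region_based_predict(x, radius=0.5, n_samples=100, seed=0):
    # Sample points uniformly from [x - radius, x + radius] (the 1-D
    # analogue of a hypercube) and majority-vote their labels.
    rng = random.Random(seed)
    votes = Counter(
        toy_classifier(x + rng.uniform(-radius, radius))
        for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]

# The prediction reflects the dominant class of the whole region
# around the input, not just the point itself.
label = region_based_predict(-0.2)
```

The vote makes the decision depend on the measure of each class within the region, which is what makes point-wise evasion perturbations less effective in this scheme.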

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-09-13 | Updated: 2018-02-10
Model Robustness Guarantees
Adversarial Training
Adversarial Examples
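The "elastic-net" in the EAD title refers to regularizing the attack perturbation with a weighted mix of L1 and L2 penalties, which encourages perturbations that are both small and sparse. A sketch of that penalty on a toy perturbation vector (the weight `beta` and the helper name are illustrative, not the paper's parameters):

```python
def elastic_net_penalty(delta, beta=0.01):
    # Elastic-net regularizer: beta * ||delta||_1 + ||delta||_2^2.
    # The L1 term promotes sparsity; the squared L2 term keeps the
    # overall perturbation magnitude small.
    l1 = sum(abs(d) for d in delta)
    l2_sq = sum(d * d for d in delta)
    return beta * l1 + l2_sq

penalty = elastic_net_penalty([0.5, -0.5], beta=0.01)
```

In the attack setting this penalty is added to a misclassification loss and minimized over the perturbation.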

Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks

Authors: Thilo Strauss, Markus Hanselmann, Andrej Junginger, Holger Ulmer | Published: 2017-09-11 | Updated: 2018-02-08
Model Robustness Guarantees
Model Performance Evaluation
Robustness Improvement
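The ensemble defense named in this title rests on a simple mechanism: combine several models and aggregate their predictions, so a perturbation that fools one member need not fool the vote. A minimal majority-vote sketch with hypothetical toy 1-D members (the thresholds stand in for independently trained networks):

```python
from collections import Counter

def make_threshold_model(t):
    # Each toy member classifies by its own threshold, standing in
    # for an independently trained network with its own boundary.
    return lambda x: 1 if x >= t else 0

ensemble = [make_threshold_model(t) for t in (-0.1, 0.0, 0.1)]

def ensemble_predict(x):
    # Aggregate member predictions by majority vote.
    votes = Counter(model(x) for model in ensemble)
    return votes.most_common(1)[0][0]

# x = 0.05 crosses only the t = 0.1 member's boundary; the other two
# members outvote it, so the ensemble prediction is unchanged.
label = ensemble_predict(0.05)
```

Because the members' decision boundaries differ, an adversarial perturbation must cross a majority of them at once to change the ensemble's output.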

Towards Proving the Adversarial Robustness of Deep Neural Networks

Authors: Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer | Published: 2017-09-08
Model Robustness Guarantees
Robustness Improvement
Adversarial Training

Learning Universal Adversarial Perturbations with Generative Models

Authors: Jamie Hayes, George Danezis | Published: 2017-08-17 | Updated: 2018-01-05
Model Robustness Guarantees
Attack Methods
Adversarial Examples

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

Authors: Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-08-14 | Updated: 2017-11-02
Poisoning
Model Robustness Guarantees
Attack Methods
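The "zeroth order optimization" in the ZOO title refers to estimating gradients from loss values alone, i.e. using only black-box queries to the target model rather than backpropagation through a substitute. The core idea can be sketched with a symmetric difference quotient on a toy scalar loss (the loss function and step size are illustrative, not the paper's attack objective):

```python
def estimate_gradient(loss, x, i, h=1e-4):
    # Zeroth-order estimate of d(loss)/dx_i via the symmetric
    # difference quotient (loss(x + h*e_i) - loss(x - h*e_i)) / (2h).
    # Only loss *values* are queried, never analytic gradients --
    # the black-box setting ZOO targets.
    x_plus = list(x)
    x_plus[i] += h
    x_minus = list(x)
    x_minus[i] -= h
    return (loss(x_plus) - loss(x_minus)) / (2 * h)

def toy_loss(x):
    # Toy quadratic stand-in for an attack objective.
    return sum(v * v for v in x)

x = [1.0, -2.0]
grad = [estimate_gradient(toy_loss, x, i) for i in range(len(x))]
# For sum(v^2) the true gradient is [2*x_0, 2*x_1]; the symmetric
# estimate matches it to finite-difference accuracy.
```

Each coordinate estimate costs two model queries, which is why the paper's title stresses that no substitute model needs to be trained: the gradient signal comes entirely from querying the black box.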

Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

Authors: Ishai Rosenberg, Asaf Shabtai, Lior Rokach, Yuval Elovici | Published: 2017-07-19 | Updated: 2018-06-24
Backdoor Attack
Malware Classification Datasets
Model Robustness Guarantees

Houdini: Fooling Deep Structured Prediction Models

Authors: Moustapha Cisse, Yossi Adi, Natalia Neverova, Joseph Keshet | Published: 2017-07-17
Model Robustness Guarantees
Adversarial Attack Evaluation
Speech Recognition Technology

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

Authors: Jonas Rauber, Wieland Brendel, Matthias Bethge | Published: 2017-07-13 | Updated: 2018-03-20
Framework Support
Model Robustness Guarantees
Robustness Requirements

A Survey on Resilient Machine Learning

Authors: Atul Kumar, Sameep Mehta | Published: 2017-07-11
Model Inversion
Model Robustness Guarantees
Model Extraction Attacks