Poisoning

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

Authors: Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-08-14 | Updated: 2017-11-02
Poisoning
Model Robustness Guarantees
Attack Methods

Certified Defenses for Data Poisoning Attacks

Authors: Jacob Steinhardt, Pang Wei Koh, Percy Liang | Published: 2017-06-09 | Updated: 2017-11-24
Poisoning
Optimization Problems
Poison Data Detection

Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection

Authors: Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli | Published: 2017-04-28
Poisoning
Malware Detection Scenario
Model Extraction Attack

Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks

Authors: Yi Han, Benjamin I. P. Rubinstein | Published: 2017-04-06 | Updated: 2017-05-25
Poisoning
Model Robustness Guarantees
Adversarial Learning

Understanding Black-box Predictions via Influence Functions

Authors: Pang Wei Koh, Percy Liang | Published: 2017-03-14 | Updated: 2020-12-29
Poisoning
Learning Improvement
Explainability Evaluation

Generative Poisoning Attack Method Against Neural Networks

Authors: Chaofei Yang, Qing Wu, Hai Li, Yiran Chen | Published: 2017-03-03
Trigger Detection
Poisoning
Generative Models