Poisoning

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

Authors: Battista Biggio, Fabio Roli | Published: 2017-12-08 | Updated: 2018-07-19
Poisoning
Adversarial Training
Adversarial Attack Methods

LatentPoison – Adversarial Attacks On The Latent Space

Authors: Antonia Creswell, Anil A. Bharath, Biswa Sengupta | Published: 2017-11-08
Poisoning
Model Robustness Guarantees
Adversarial Attacks

Practical Attacks Against Graph-based Clustering

Authors: Yizheng Chen, Yacin Nadji, Athanasios Kountouras, Fabian Monrose, Roberto Perdisci, Manos Antonakakis, Nikolaos Vasiloglou | Published: 2017-08-29
Community Detection
Poisoning
Attack Methods

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

Authors: Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli | Published: 2017-08-29
Poisoning
Optimization Methods
Deep Learning Models

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

Authors: Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-08-14 | Updated: 2017-11-02
Poisoning
Model Robustness Guarantees
Attack Methods

Certified Defenses for Data Poisoning Attacks

Authors: Jacob Steinhardt, Pang Wei Koh, Percy Liang | Published: 2017-06-09 | Updated: 2017-11-24
Poisoning
Optimization Problems
Poisoned Data Detection

Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection

Authors: Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli | Published: 2017-04-28
Poisoning
Malware Detection Scenarios
Model Extraction Attacks

Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks

Authors: Yi Han, Benjamin I. P. Rubinstein | Published: 2017-04-06 | Updated: 2017-05-25
Poisoning
Model Robustness Guarantees
Adversarial Learning

Understanding Black-box Predictions via Influence Functions

Authors: Pang Wei Koh, Percy Liang | Published: 2017-03-14 | Updated: 2020-12-29
Poisoning
Training Improvement
Explainability Evaluation

Generative Poisoning Attack Method Against Neural Networks

Authors: Chaofei Yang, Qing Wu, Hai Li, Yiran Chen | Published: 2017-03-03
Trigger Detection
Poisoning
Generative Models