Backdoor Attack

Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques

Authors: Dorjan Hitaj, Luigi V. Mancini | Published: 2018-09-03
Backdoor Attack
Detection of Model Extraction Attacks
Transparency and Verification

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

Authors: Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David Miller | Published: 2018-08-30
Backdoor Attack
Backdoor Attack Countermeasures
Robustness Analysis
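
The entry above concerns embedding a backdoor through a low-visibility additive perturbation. As a rough illustration only (not the authors' exact perturbation scheme), the sketch below poisons a small fraction of training images with a faint trigger pattern and relabels them to an attacker-chosen class; the trigger amplitude, poison rate, and target label are assumptions made for this example.

```python
# Hedged sketch: generic backdoor embedding via a faint additive trigger.
# Illustrative stand-in, not the paper's exact method; trigger amplitude,
# poison rate, and target class are assumptions.
import numpy as np

def poison_dataset(x, y, target_class=0, poison_rate=0.05, amplitude=0.03, seed=0):
    """Add a low-amplitude trigger to a random subset of images (values in [0, 1])
    and relabel those samples to `target_class`."""
    rng = np.random.default_rng(seed)
    x, y = x.copy(), y.copy()
    n_poison = int(len(x) * poison_rate)
    idx = rng.choice(len(x), size=n_poison, replace=False)

    # Fixed pseudo-random trigger pattern, kept small so it is hard to notice.
    trigger = amplitude * rng.standard_normal(x.shape[1:])
    x[idx] = np.clip(x[idx] + trigger, 0.0, 1.0)
    y[idx] = target_class  # backdoored samples map to the attacker's target label
    return x, y, idx

# Toy usage with random "images" standing in for a real training set.
x_train = np.random.rand(1000, 32, 32, 3).astype(np.float32)
y_train = np.random.randint(0, 10, size=1000)
x_poisoned, y_poisoned, poisoned_idx = poison_dataset(x_train, y_train)
```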

Adversarial Robustness Toolbox v1.0.0

Authors: Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards | Published: 2018-07-03 | Updated: 2019-11-15
Backdoor Attack
Attack Evaluation
Adversarial Training
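
Since the Adversarial Robustness Toolbox (ART) is a software library, a short usage sketch may be helpful. The snippet below wraps a toy PyTorch classifier and crafts evasion examples with FastGradientMethod; import paths follow recent ART 1.x releases and may differ slightly from the v1.0.0 release listed above, and the model and data are assumptions for illustration.

```python
# Hedged sketch: crafting adversarial examples with the Adversarial Robustness
# Toolbox (ART). Import paths follow recent 1.x releases and may differ from
# v1.0.0; the toy model and random data are assumptions.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy MNIST-shaped model
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(16, 1, 28, 28).astype(np.float32)       # stand-in inputs
attack = FastGradientMethod(estimator=classifier, eps=0.1)  # FGSM evasion attack
x_adv = attack.generate(x=x)                                # perturbed copies of x
print(x_adv.shape)
```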

Adversarial Attack on Graph Structured Data

Authors: Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song | Published: 2018-06-06
Graph Representation Learning
Backdoor Attack
Model Robustness Guarantees

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Authors: Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein | Published: 2018-04-03 | Updated: 2018-11-10
Backdoor Attack
Poisoning
Poisoned Data Detection
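
Poison Frogs! builds clean-label poisons by "feature collision": a poison image is optimized to land near the target instance in feature space while staying visually close to a base image from the poison class. A minimal PyTorch sketch of that objective follows; the feature extractor, images, and hyperparameters are toy assumptions rather than the paper's experimental setup.

```python
# Hedged sketch of the feature-collision objective behind clean-label
# targeted poisoning. The feature extractor and inputs are toy stand-ins.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(               # stand-in for a pretrained
    nn.Flatten(), nn.Linear(3 * 32 * 32, 128))   # network's penultimate features

target = torch.rand(1, 3, 32, 32)   # target instance the attacker wants misclassified
base = torch.rand(1, 3, 32, 32)     # base instance drawn from the poison class
poison = base.clone().requires_grad_(True)

beta = 0.25                          # weight on staying close to the base image
opt = torch.optim.Adam([poison], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    # Feature collision: match the target in feature space while remaining
    # visually close to the base image, so the poison keeps its "clean" label.
    loss = (feature_extractor(poison) - feature_extractor(target)).pow(2).sum() \
         + beta * (poison - base).pow(2).sum()
    loss.backward()
    opt.step()
    with torch.no_grad():
        poison.clamp_(0.0, 1.0)      # keep a valid image
```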

BEBP: An Poisoning Method Against Machine Learning Based IDSs

Authors: Pan Li, Qiang Liu, Wentao Zhao, Dongxu Wang, Siqi Wang | Published: 2018-03-11
Data Generation Methods
Backdoor Attack
Poisoned Data Detection

Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

Authors: Ishai Rosenberg, Asaf Shabtai, Lior Rokach, Yuval Elovici | Published: 2017-07-19 | Updated: 2018-06-24
Backdoor Attack
Datasets for Malware Classification
Model Robustness Guarantees

Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking

Authors: Erwin Quiring, Daniel Arp, Konrad Rieck | Published: 2017-03-16
Backdoor Attack
Attack Pattern Extraction
Defense Mechanisms