Transferable Clean-Label Poisoning Attacks on Deep Neural Nets | Authors: Chen Zhu, W. Ronny Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein | Published: 2019-05-15 | Updated: 2019-05-16 | Tags: Backdoor Attack, Poisoning, Attack Type
Robustification of Deep Net Classifiers by Key-Based Diversified Aggregation with Pre-Filtering | Authors: Olga Taran, Shideh Rezaeifar, Taras Holotyak, Slava Voloshynovskiy | Published: 2019-05-14 | Tags: Secure Aggregation, Performance Evaluation, Attack Type
Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation | Authors: Andrew Norton, Yanjun Qi | Published: 2017-06-06 | Updated: 2017-06-16 | Tags: Model Robustness Guarantee, Attack Type, Adversarial Training
MagNet: A Two-Pronged Defense against Adversarial Examples | Authors: Dongyu Meng, Hao Chen | Published: 2017-05-25 | Updated: 2017-09-11 | Tags: Attack Type, Adversarial Example Detection, Analysis of Defense Effectiveness
Black-Box Attacks against RNN-Based Malware Detection Algorithms | Authors: Weiwei Hu, Ying Tan | Published: 2017-05-23 | Tags: Model Robustness Guarantee, Attack Type, Adversarial Training