Adversarial Attack Methods

Can You Really Backdoor Federated Learning?

Authors: Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, H. Brendan McMahan | Published: 2019-11-18 | Updated: 2019-12-02
Adversarial Attack Methods
Threat Model
Effectiveness Analysis of Defense Methods

A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories

Authors: Zhaohui Che, Ali Borji, Guangtao Zhai, Suiyi Ling, Jing Li, Patrick Le Callet | Published: 2019-11-18
Backdoor Attack
Model Performance Evaluation
Adversarial Attack Methods

Black-Box Adversarial Attack with Transferable Model-based Embedding

Authors: Zhichao Huang, Tong Zhang | Published: 2019-11-17 | Updated: 2020-01-05
Adversarial Examples
Adversarial Attack Methods
Knowledge Transferability

Defending Against Model Stealing Attacks with Adaptive Misinformation

Authors: Sanjay Kariyappa, Moinuddin K Qureshi | Published: 2019-11-16
Adversarial Examples
Adversarial Attack Methods
Effectiveness Analysis of Defense Methods

Suspicion-Free Adversarial Attacks on Clustering Algorithms

Authors: Anshuman Chhabra, Abhishek Roy, Prasant Mohapatra | Published: 2019-11-16
Model Performance Evaluation
Numerical Stability Issues
Adversarial Attack Methods

DomainGAN: Generating Adversarial Examples to Attack Domain Generation Algorithm Classifiers

Authors: Isaac Corley, Jonathan Lwowski, Justin Hoffman | Published: 2019-11-14 | Updated: 2020-02-14
Botnet Detection
Model Performance Evaluation
Adversarial Attack Methods

There is Limited Correlation between Coverage and Robustness for Deep Neural Networks

Authors: Yizhen Dong, Peixin Zhang, Jingyi Wang, Shuang Liu, Jun Sun, Jianye Hao, Xinyu Wang, Li Wang, Jin Song Dong, Dai Ting | Published: 2019-11-14
Model Performance Evaluation
Adversarial Examples
Adversarial Attack Methods

Adversarial Examples in Modern Machine Learning: A Review

Authors: Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker | Published: 2019-11-13 | Updated: 2019-11-15
Poisoning
Adversarial Examples
Adversarial Attack Methods

On Robustness to Adversarial Examples and Polynomial Optimization

Authors: Pranjal Awasthi, Abhratanu Dutta, Aravindan Vijayaraghavan | Published: 2019-11-12
Model Performance Evaluation
Adversarial Attack Methods
Computational Problems

Patch augmentation: Towards efficient decision boundaries for neural networks

Authors: Marcus D. Bloice, Peter M. Roth, Andreas Holzinger | Published: 2019-11-08 | Updated: 2019-11-25
Model Performance Evaluation
Adversarial Attack Methods
Feature Engineering