Adversarial Attack Detection

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks

Authors: Francesco Croce, Jonas Rauber, Matthias Hein | Published: 2019-03-27 | Updated: 2019-09-25
Trigger Detection
Adversarial Training
Adversarial Attack Detection

A Geometry-Inspired Decision-Based Attack

Authors: Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard | Published: 2019-03-26
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Defending against Whitebox Adversarial Attacks via Randomized Discretization

Authors: Yuchen Zhang, Percy Liang | Published: 2019-03-25
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

Authors: Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot | Published: 2019-03-25
Model Robustness Guarantees
Vulnerability of Adversarial Examples
Adversarial Attack Detection

Robust Neural Networks using Randomized Adversarial Training

Authors: Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne | Published: 2019-03-25 | Updated: 2020-02-13
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Detection

Data Poisoning against Differentially-Private Learners: Attacks and Defenses

Authors: Yuzhe Ma, Xiaojin Zhu, Justin Hsu | Published: 2019-03-23 | Updated: 2019-07-05
Detection of Poisoned Data for Backdoor Attacks
Adversarial Attack Detection
Untargeted Poisoning Attacks

Improving Adversarial Robustness via Guided Complement Entropy

Authors: Hao-Yun Chen, Jhao-Hong Liang, Shih-Chieh Chang, Jia-Yu Pan, Yu-Ting Chen, Wei Wei, Da-Cheng Juan | Published: 2019-03-23 | Updated: 2019-08-07
Robust Optimization
Adversarial Training
Adversarial Attack Detection

On the Robustness of Deep K-Nearest Neighbors

Authors: Chawin Sitawarin, David Wagner | Published: 2019-03-20
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Clonability of anti-counterfeiting printable graphical codes: a machine learning approach

Authors: Olga Taran, Slavi Bonev, Slava Voloshynovskiy | Published: 2019-03-18
Performance Evaluation
Adversarial Attack Detection
Deep Learning Models

Generating Adversarial Examples With Conditional Generative Adversarial Net

Authors: Ping Yu, Kaitao Song, Jianfeng Lu | Published: 2019-03-18
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection