Adversarial Attack Detection

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

Authors: Ahmed Salem, Apratim Bhattacharjee, Michael Backes, Mario Fritz, Yang Zhang | Published: 2019-04-01 | Updated: 2019-11-30
Model Extraction Attack
Reconstruction Attack
Adversarial Attack Detection

Defending against adversarial attacks by randomized diversification

Authors: Olga Taran, Shideh Rezaeifar, Taras Holotyak, Slava Voloshynovskiy | Published: 2019-04-01
Adversarial Example Detection
Adversarial Attack Detection
Watermark Durability

On the Vulnerability of CNN Classifiers in EEG-Based BCIs

Authors: Xiao Zhang, Dongrui Wu | Published: 2019-03-31
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Detection

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

Authors: Dan Hendrycks, Thomas Dietterich | Published: 2019-03-28
Robust Optimization
Adversarial Training
Adversarial Attack Detection

Rallying Adversarial Techniques against Deep Learning for Network Security

Authors: Joseph Clements, Yuzhe Yang, Ankur Sharma, Hongxin Hu, Yingjie Lao | Published: 2019-03-27 | Updated: 2021-10-25
Effective Perturbation Methods
Adversarial Training
Adversarial Attack Detection

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks

Authors: Francesco Croce, Jonas Rauber, Matthias Hein | Published: 2019-03-27 | Updated: 2019-09-25
Trigger Detection
Adversarial Training
Adversarial Attack Detection

A geometry-inspired decision-based attack

Authors: Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard | Published: 2019-03-26
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Defending against Whitebox Adversarial Attacks via Randomized Discretization

Authors: Yuchen Zhang, Percy Liang | Published: 2019-03-25
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

Authors: Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot | Published: 2019-03-25
Model Robustness Guarantees
Adversarial Example Vulnerability
Adversarial Attack Detection

Robust Neural Networks using Randomized Adversarial Training

Authors: Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne | Published: 2019-03-25 | Updated: 2020-02-13
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Detection