Model Robustness Guarantees

Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks

Authors: Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz | Published: 2019-06-26 | Updated: 2020-03-03
Model Robustness Guarantees
Model Extraction Attack Detection
Attack Evaluation
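
The defense proposed here perturbs the victim model's posterior predictions so that a model-stealing attacker trains its copy on a degraded signal. As a loose illustration of the general idea only (the paper's actual method, MAD, maximizes the angular deviation of the attacker's gradients; the mixing scheme below is a hypothetical simplification), one can blend the returned probabilities with random noise while preserving the top-1 label:

```python
import numpy as np

def poison_predictions(probs: np.ndarray, eps: float = 0.2,
                       rng: np.random.Generator | None = None) -> np.ndarray:
    """Blend a posterior vector with simplex noise, keeping the argmax label.

    Simplified stand-in for prediction poisoning; NOT the paper's MAD objective.
    """
    rng = rng or np.random.default_rng()
    noise = rng.dirichlet(np.ones_like(probs))    # random distribution on the simplex
    poisoned = (1.0 - eps) * probs + eps * noise  # convex mix is still a distribution
    # Fall back to the clean posterior if the predicted label would change.
    return poisoned if poisoned.argmax() == probs.argmax() else probs

print(poison_predictions(np.array([0.7, 0.2, 0.1])))
```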

On the Vulnerability of CNN Classifiers in EEG-Based BCIs

Authors: Xiao Zhang, Dongrui Wu | Published: 2019-03-31
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Detection
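
The vulnerability studied here is exposed with standard gradient-based attacks such as FGSM. A minimal, generic FGSM sketch in PyTorch (the tiny `model` and random inputs below are hypothetical stand-ins for an EEG CNN and real recordings):

```python
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """One-step FGSM: move x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Toy stand-in for an EEG classifier: flatten + linear head.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 4))
x, y = torch.randn(8, 1, 8, 8), torch.randint(0, 4, (8,))
x_adv = fgsm(model, x, y)
```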

Bridging Adversarial Robustness and Gradient Interpretability

Authors: Beomsu Kim, Junghoon Seo, Taegyun Jeon | Published: 2019-03-27 | Updated: 2019-04-19
Model Robustness Guarantees
Adversarial Training
Interpretability
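
The bridge examined here runs through input gradients, which double as saliency maps; robust models tend to produce visually cleaner ones. Computing such a map is a few lines of PyTorch (generic sketch; `model` is any differentiable classifier):

```python
import torch

def saliency_map(model: torch.nn.Module, x: torch.Tensor,
                 y: torch.Tensor) -> torch.Tensor:
    """Absolute gradient of the true-class logit w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    model(x).gather(1, y.unsqueeze(1)).sum().backward()
    return x.grad.abs()
```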

A geometry-inspired decision-based attack

Authors: Yujia Liu, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard | Published: 2019-03-26
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection
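
Decision-based attacks use only the model's top-1 decision, with no gradients or scores. The sketch below shows the simplest such primitive, a binary search along the segment between a known adversarial point and the clean input; the paper's method is more sophisticated (it exploits the geometry of the decision boundary), so treat this purely as an illustration of the query model:

```python
import numpy as np

def boundary_binary_search(is_adv, x_clean: np.ndarray, x_adv: np.ndarray,
                           steps: int = 50) -> np.ndarray:
    """Shrink an adversarial example toward x_clean using only decisions.

    is_adv: callable mapping an input to True iff the model misclassifies it.
    """
    lo, hi = 0.0, 1.0  # fraction of the way from x_adv toward x_clean
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        cand = (1.0 - mid) * x_adv + mid * x_clean
        if is_adv(cand):
            lo = mid   # still adversarial: safe to move closer to the clean input
        else:
            hi = mid
    return (1.0 - lo) * x_adv + lo * x_clean
```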

Defending against Whitebox Adversarial Attacks via Randomized Discretization

Authors: Yuchen Zhang, Percy Liang | Published: 2019-03-25
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection
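
The defense in this title preprocesses each input by injecting random noise and then snapping pixel values onto a small discrete set, destroying the fine-grained structure that white-box attacks rely on. A minimal sketch of that preprocessing (the evenly spaced codebook and noise scale below are illustrative choices, not the paper's exact construction):

```python
import numpy as np

def randomized_discretize(x: np.ndarray, sigma: float = 0.1,
                          levels: int = 8) -> np.ndarray:
    """Add Gaussian noise, then quantize each value to the nearest of
    `levels` evenly spaced points in [0, 1]."""
    noisy = np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 1.0)
    return np.round(noisy * (levels - 1)) / (levels - 1)

defended = randomized_discretize(np.random.rand(32, 32, 3))
```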

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

Authors: Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot | Published: 2019-03-25
Model Robustness Guarantees
Adversarial Example Vulnerability
Adversarial Attack Detection

The LogBarrier adversarial attack: making effective use of decision boundary information

Authors: Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman | Published: 2019-03-25
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Training
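
The LogBarrier attack frames minimal-norm perturbation as constrained optimization: minimize the perturbation's norm subject to staying misclassified, with the constraint enforced by a logarithmic barrier. A condensed PyTorch sketch of that idea (the initialization, step sizes, and barrier weight are illustrative, and a real implementation anneals the barrier):

```python
import torch

def logbarrier_attack(model, x, y, steps=200, mu=0.05, lr=0.01):
    """Minimize ||delta||^2 - mu * log(margin), keeping x + delta misclassified."""
    delta = (0.5 * torch.randn_like(x)).requires_grad_(True)  # hopefully misclassified start
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        others = logits.clone()
        others.scatter_(1, y.unsqueeze(1), float('-inf'))       # mask out the true class
        margin = (others.max(dim=1).values
                  - logits.gather(1, y.unsqueeze(1)).squeeze(1))  # > 0 iff misclassified
        loss = (delta ** 2).sum() - mu * torch.log(margin.clamp_min(1e-6)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()
```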

Robust Neural Networks using Randomized Adversarial Training

Authors: Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne | Published: 2019-03-25 | Updated: 2020-02-13
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Detection
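
This line of work builds on the standard adversarial-training loop: craft adversarial examples for each batch, then take the gradient step on them. A generic single-step (FGSM-based) version is sketched below; the paper's specific contribution, combining adversarial training with randomization, is not reproduced here:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, opt, x, y, eps=0.03):
    """One optimizer step on FGSM-perturbed inputs (generic recipe)."""
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()      # gradient w.r.t. the input
    x_adv = (x + eps * x_req.grad.sign()).detach()
    opt.zero_grad()                                  # discard grads from the attack pass
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```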

On the Robustness of Deep K-Nearest Neighbors

Authors: Chawin Sitawarin, David Wagner | Published: 2019-03-20
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection
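
Deep k-nearest neighbors classifies a point by a kNN vote in a network's intermediate feature space rather than by the softmax head. A compact sketch with scikit-learn (the identity `feature_fn` below is a toy stand-in for, e.g., a CNN's penultimate-layer activations):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_deep_knn(feature_fn, X_train, y_train, k=5):
    """Fit a kNN classifier over features produced by `feature_fn`."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(feature_fn(X_train), y_train)
    return knn

feature_fn = lambda X: X   # toy "features"; replace with a real embedding
knn = fit_deep_knn(feature_fn, np.random.rand(100, 16), np.random.randint(0, 3, 100))
print(knn.predict(feature_fn(np.random.rand(2, 16))))
```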

Generating Adversarial Examples With Conditional Generative Adversarial Net

Authors: Ping Yu, Kaitao Song, Jianfeng Lu | Published: 2019-03-18
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection
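
Here a conditional GAN's generator, conditioned on the desired target class, learns to emit perturbations that fool a classifier. The skeleton below shows only the conditional generator with a norm-bounded output (the architecture, sizes, and `eps` bound are hypothetical; the adversarial and GAN training losses are omitted):

```python
import torch
import torch.nn as nn

class PerturbGenerator(nn.Module):
    """Toy conditional generator: (input, target-class embedding) -> bounded perturbation."""
    def __init__(self, dim=16, n_classes=10, eps=0.05):
        super().__init__()
        self.embed = nn.Embedding(n_classes, dim)
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                 nn.Linear(64, dim), nn.Tanh())
        self.eps = eps

    def forward(self, x, target):
        z = torch.cat([x, self.embed(target)], dim=1)
        return x + self.eps * self.net(z)   # Tanh keeps the perturbation in [-eps, eps]

g = PerturbGenerator()
x_adv = g(torch.randn(4, 16), torch.tensor([0, 1, 2, 3]))
```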