Model Robustness Guarantees

Defending against Whitebox Adversarial Attacks via Randomized Discretization

Authors: Yuchen Zhang, Percy Liang | Published: 2019-03-25
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

Authors: Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot | Published: 2019-03-25
Model Robustness Guarantees
Vulnerability of Adversarial Examples
Adversarial Attack Detection

The LogBarrier adversarial attack: making effective use of decision boundary information

Authors: Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman | Published: 2019-03-25
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Learning

Robust Neural Networks using Randomized Adversarial Training

Authors: Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne | Published: 2019-03-25 | Updated: 2020-02-13
Model Robustness Guarantees
Adversarial Learning
Adversarial Attack Detection

On the Robustness of Deep K-Nearest Neighbors

Authors: Chawin Sitawarin, David Wagner | Published: 2019-03-20
Model Robustness Guarantees
Effective Perturbation Methods
Adversarial Attack Detection

Generating Adversarial Examples With Conditional Generative Adversarial Net

Authors: Ping Yu, Kaitao Song, Jianfeng Lu | Published: 2019-03-18
Model Robustness Guarantees
Adversarial Examples
Adversarial Attack Detection

A Research Agenda: Dynamic Models to Defend Against Correlated Attacks

Authors: Ian Goodfellow | Published: 2019-03-14
Model Robustness Guarantees
Dynamic Service Scheduling
Adversarial Attack Methods

Attribution-driven Causal Analysis for Detection of Adversarial Examples

Authors: Susmit Jha, Sunny Raj, Steven Lawrence Fernandes, Sumit Kumar Jha, Somesh Jha, Gunjan Verma, Brian Jalaian, Ananthram Swami | Published: 2019-03-14
Model Robustness Guarantees
Adversarial Learning
Adversarial Attack Methods

Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models

Authors: Adith Boloor, Xin He, Christopher Gill, Yevgeniy Vorobeychik, Xuan Zhang | Published: 2019-03-12
Model Robustness Guarantees
Adversarial Attacks
Physical Attacks

GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier

Authors: Guanxiong Liu, Issa Khalil, Abdallah Khreishah | Published: 2019-03-06
Model Robustness Guarantees
Adversarial Learning
Adversarial Training