Adversarial Examples

Attacking Binarized Neural Networks

Authors: Angus Galloway, Graham W. Taylor, Medhat Moussa | Published: 2017-11-01 | Updated: 2018-01-31
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Examples
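
This paper studies gradient-based attacks against binarized neural networks (BNNs), whose weights are quantized to ±1 but trained with a straight-through gradient estimator. Below is a minimal sketch of that setup under attack; the tiny model, shapes, and FGSM hyperparameters are illustrative stand-ins, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize weights in the forward pass; pass gradients straight through."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through estimator: zero gradients where |w| > 1.
        return grad_out * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM attack (Goodfellow et al.), applied here to a BNN."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy usage on random data (shapes are hypothetical).
model = nn.Sequential(BinaryLinear(784, 128), nn.ReLU(), BinaryLinear(128, 10))
x, y = torch.rand(4, 784), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```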

One pixel attack for fooling deep neural networks

Authors: Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi | Published: 2017-10-24 | Updated: 2019-10-17
Adversarial Examples
Adversarial Example Detection
Structural Attacks
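
The one-pixel attack searches for a single pixel whose modification flips the prediction, using differential evolution rather than gradients. A minimal self-contained sketch follows; the tiny softmax "model" and the 32x32 image are stand-ins so the example runs on its own.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
H = W = 32
weights = rng.normal(size=(H * W * 3, 10))  # hypothetical linear classifier

def predict_proba(img):
    logits = img.reshape(-1) @ weights
    e = np.exp(logits - logits.max())
    return e / e.sum()

image = rng.random((H, W, 3))
true_label = int(np.argmax(predict_proba(image)))

def apply_pixel(img, p):
    """Overwrite exactly one pixel: p = (x, y, r, g, b)."""
    out = img.copy()
    out[int(round(p[1])), int(round(p[0]))] = p[2:5]
    return out

def objective(p):
    # Minimize the model's confidence in the true class.
    return predict_proba(apply_pixel(image, p))[true_label]

bounds = [(0, W - 1), (0, H - 1), (0, 1), (0, 1), (0, 1)]
result = differential_evolution(objective, bounds, maxiter=20, popsize=10, seed=0)
adversarial = apply_pixel(image, result.x)
print("confidence in true class after attack:", objective(result.x))
```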

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-09-13 | Updated: 2018-02-10
Model Robustness Guarantees
Adversarial Training
Adversarial Examples
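
EAD regularizes the attack objective with an elastic-net (L1 + L2) penalty and solves it with ISTA: each gradient step is followed by a soft-thresholding step that shrinks each pixel back toward the clean image and clips to the valid box. The sketch below shows that shrinkage operator; the gradient comes from a placeholder quadratic loss so the example is self-contained, whereas a real attack would differentiate the classifier's loss.

```python
import numpy as np

def shrink(z, x0, beta):
    """Element-wise projected soft-thresholding toward x0 (L1 proximal step)."""
    upper = np.minimum(z - beta, 1.0)
    lower = np.maximum(z + beta, 0.0)
    out = np.where(z - x0 > beta, upper, x0)
    out = np.where(z - x0 < -beta, lower, out)
    return out

rng = np.random.default_rng(0)
x0 = rng.random(8)        # clean "image" (flattened, hypothetical)
target = rng.random(8)    # stand-in for the direction the attack loss prefers
x = x0.copy()
alpha, beta = 0.1, 0.01   # step size and L1 weight (illustrative)

for _ in range(100):
    grad = x - target      # placeholder for d(attack loss)/dx
    x = shrink(x - alpha * grad, x0, beta)

print("L1 distortion:", np.abs(x - x0).sum())
```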

Learning Universal Adversarial Perturbations with Generative Models

Authors: Jamie Hayes, George Danezis | Published: 2017-08-17 | Updated: 2018-01-05
Model Robustness Guarantees
Attack Methods
Adversarial Examples
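
In the spirit of this paper, a universal perturbation can be produced by a small generator network: a fixed latent is mapped to one perturbation that is applied to every input and trained to maximize the victim's loss. A minimal sketch, with an illustrative victim model, shapes, and budget that are not the paper's exact setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

victim = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
z = torch.randn(1, 64)  # fixed latent; the perturbation is input-agnostic
eps = 0.1               # assumed L-infinity budget

for step in range(200):
    x = torch.rand(32, 784)                        # stand-in training batch
    y = victim(x).argmax(dim=1)                    # victim's own labels
    delta = eps * generator(z)                     # one delta for the batch
    loss = -F.cross_entropy(victim(x + delta), y)  # maximize victim loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```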

Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning

Authors: Andrew P. Norton, Yanjun Qi | Published: 2017-08-01
Educational Approach
Adversarial Examples
Image Classification Methods

NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

Authors: Jiajun Lu, Hussein Sibai, Evan Fabry, David Forsyth | Published: 2017-07-12
Adversarial Examples
Adversarial Example Detection
Image Processing

Towards Deep Learning Models Resistant to Adversarial Attacks

Authors: Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | Published: 2017-06-19 | Updated: 2019-09-04
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples
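
This paper builds adversarial training on the projected gradient descent (PGD) attack: repeated signed-gradient steps, each followed by projection back into the L-infinity ball around the clean input. A minimal sketch, with a toy model and illustrative hyperparameters:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    # Random start inside the epsilon ball, as the paper recommends.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep valid pixel range
    return x_adv.detach()

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.rand(4, 784), torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
# Adversarial training then fits the model on x_adv instead of x.
```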

Extending Defensive Distillation

Authors: Nicolas Papernot, Patrick McDaniel | Published: 2017-05-15
Robustness
Adversarial Examples
Defense Methods
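
The defensive distillation recipe this paper extends works as follows: a teacher is trained with a high softmax temperature T, a student is trained to match the teacher's softened outputs at the same T, and the student is deployed at T = 1. A minimal sketch with stand-in models, data, and temperature:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20.0  # distillation temperature (illustrative value)
teacher = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.rand(32, 784)  # stand-in training batch
    with torch.no_grad():
        soft_labels = F.softmax(teacher(x) / T, dim=1)  # softened targets
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
# At test time the student is used at T = 1, i.e. plain student(x).
```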

Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains

Authors: Tegjyot Singh Sethi, Mehmed Kantardzic | Published: 2017-03-23
Performance Evaluation
Attack Pattern Extraction
Adversarial Examples

Tactics of Adversarial Attack on Deep Reinforcement Learning Agents

Authors: Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, Min Sun | Published: 2017-03-08 | Updated: 2019-11-13
Attack Pattern Extraction
Adversarial Examples
Defense Mechanisms
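
One of this paper's tactics is the strategically timed attack: rather than perturbing every frame, the adversary attacks only when the policy strongly prefers one action over the others. A minimal sketch of that timing criterion; the policy, the perturbation function, and the threshold are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))  # hypothetical fixed policy weights

def policy(obs):
    """Stand-in stochastic policy: returns action probabilities."""
    logits = obs @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def perturb(obs):
    """Placeholder for any per-frame attack, e.g. FGSM on the policy loss."""
    return obs + 0.05 * rng.standard_normal(obs.shape)

THRESHOLD = 0.5  # attack only when the agent clearly prefers one action

for t in range(100):
    obs = rng.random(16)                    # stand-in observation
    probs = policy(obs)
    preference = probs.max() - probs.min()  # the paper's timing criterion
    if preference > THRESHOLD:
        obs = perturb(obs)                  # inject perturbation at key steps
    action = int(np.argmax(policy(obs)))
```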