Adversarial Examples

Intriguing Properties of Adversarial Examples

Authors: Ekin D. Cubuk, Barret Zoph, Samuel S. Schoenholz, Quoc V. Le | Published: 2017-11-08
Adversarial Examples
Adversarial Training
Adversarial Attack

Adversarial Frontier Stitching for Remote Neural Network Watermarking

Authors: Erwan Le Merrer, Patrick Perez, Gilles Trédan | Published: 2017-11-06 | Updated: 2019-08-07
Adversarial Examples
Adversarial Training
Watermark Design

Attacking Binarized Neural Networks

Authors: Angus Galloway, Graham W. Taylor, Medhat Moussa | Published: 2017-11-01 | Updated: 2018-01-31
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Examples

One pixel attack for fooling deep neural networks

Authors: Jiawei Su, Danilo Vasconcellos Vargas, Sakurai Kouichi | Published: 2017-10-24 | Updated: 2019-10-17
Adversarial Examples
Adversarial Example Detection
Structural Attack

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

Authors: Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh | Published: 2017-09-13 | Updated: 2018-02-10
Model Robustness Guarantees
Adversarial Training
Adversarial Examples

Learning Universal Adversarial Perturbations with Generative Models

Authors: Jamie Hayes, George Danezis | Published: 2017-08-17 | Updated: 2018-01-05
Model Robustness Guarantees
Attack Methods
Adversarial Examples

Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning

Authors: Andrew P. Norton, Yanjun Qi | Published: 2017-08-01
Educational Approach
Adversarial Examples
Image Classification Methods

NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

Authors: Jiajun Lu, Hussein Sibai, Evan Fabry, David Forsyth | Published: 2017-07-12
Adversarial Examples
Adversarial Example Detection
Image Processing

Towards Deep Learning Models Resistant to Adversarial Attacks

Authors: Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | Published: 2017-06-19 | Updated: 2019-09-04
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples

Extending Defensive Distillation

Authors: Nicolas Papernot, Patrick McDaniel | Published: 2017-05-15
Robustness
Adversarial Examples
Defense Methods