Adversarial Learning

Improving Network Robustness against Adversarial Attacks with Compact Convolution

Authors: Rajeev Ranjan, Swami Sankaranarayanan, Carlos D. Castillo, Rama Chellappa | Published: 2017-12-03 | Updated: 2018-03-22
Robustness Improvement Methods
Adversarial Examples
Adversarial Learning

Where Classification Fails, Interpretation Rises

Authors: Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang | Published: 2017-12-02
FDI Attack Detection Methods
Model Robustness Guarantees
Adversarial Learning

Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples

Authors: Jihun Hamm, Akshay Mehra | Published: 2017-11-12 | Updated: 2018-06-27
Robustness Improvement
Adversarial Learning
Adversarial Attack Analysis

Intriguing Properties of Adversarial Examples

Authors: Ekin D. Cubuk, Barret Zoph, Samuel S. Schoenholz, Quoc V. Le | Published: 2017-11-08
Adversarial Examples
Adversarial Learning
Adversarial Attacks

Adversarial Frontier Stitching for Remote Neural Network Watermarking

Authors: Erwan Le Merrer, Patrick Perez, Gilles Trédan | Published: 2017-11-06 | Updated: 2019-08-07
Adversarial Examples
Adversarial Learning
Watermark Design

Implicit Weight Uncertainty in Neural Networks

Authors: Nick Pawlowski, Andrew Brock, Matthew C. H. Lee, Martin Rajchl, Ben Glocker | Published: 2017-11-03 | Updated: 2018-05-25
Robustness
Adversarial Learning
Machine Learning

Certifying Some Distributional Robustness with Principled Adversarial Training

Authors: Aman Sinha, Hongseok Namkoong, Riccardo Volpi, John Duchi | Published: 2017-10-29 | Updated: 2020-05-01
Wasserstein Distance
Robustness Improvement Methods
Adversarial Learning

Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features

Authors: Liang Tong, Bo Li, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik | Published: 2017-08-28 | Updated: 2019-05-10
Model Extraction Attacks
Robustness Analysis
Adversarial Learning

Cascade Adversarial Machine Learning Regularized with a Unified Embedding

Authors: Taesik Na, Jong Hwan Ko, Saibal Mukhopadhyay | Published: 2017-08-08 | Updated: 2018-03-17
Robustness Analysis
Attack Methods
Adversarial Learning

Adversarial-Playground: A Visualization Suite for Adversarial Sample Generation

Authors: Andrew Norton, Yanjun Qi | Published: 2017-06-06 | Updated: 2017-06-16
Model Robustness Guarantees
Attack Types
Adversarial Learning