Adversarial Learning

Adversarial Regression with Multiple Learners

Authors: Liang Tong, Sixie Yu, Scott Alfeld, Yevgeniy Vorobeychik | Published: 2018-06-06
Poisoning
Loss Function
Adversarial Learning

Detecting Adversarial Examples via Key-based Network

Authors: Pinlong Zhao, Zhouyu Fu, Ou wu, Qinghua Hu, Jun Wang | Published: 2018-06-02
Adversarial Learning
Adversarial Transferability
Watermark Evaluation

Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients

Authors: Fuxun Yu, Zirui Xu, Yanzhi Wang, Chenchen Liu, Xiang Chen | Published: 2018-05-23 | Updated: 2018-06-07
Model Robustness
Adversarial Learning
Adversarial Attack Detection

Constructing Unrestricted Adversarial Examples with Generative Models

Authors: Yang Song, Rui Shu, Nate Kushman, Stefano Ermon | Published: 2018-05-21 | Updated: 2018-12-02
Adversarial Learning
Adversarial Attack Detection
Generative Models

Curriculum Adversarial Training

Authors: Qi-Zhi Cai, Min Du, Chang Liu, Dawn Song | Published: 2018-05-13
Data Curation
Model Robustness
Adversarial Learning

Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size

Authors: Ian Goodfellow | Published: 2018-04-21
Adversarial Learning
Adversarial Attack Methods
Watermarking Technology

Learning More Robust Features with Adversarial Training

Authors: Shuangtao Li, Yuanke Chen, Yanlin Peng, Lin Bai | Published: 2018-04-20
Adversarial Training
Adversarial Learning
Watermarking Technology

Adversarial Attacks Against Medical Deep Learning Systems

Authors: Samuel G. Finlayson, Hyung Won Chung, Isaac S. Kohane, Andrew L. Beam | Published: 2018-04-15 | Updated: 2019-02-04
Adversarial Learning
Adversarial Attack Analysis
Deep Learning

Adversarial Training Versus Weight Decay

Authors: Angus Galloway, Thomas Tanay, Graham W. Taylor | Published: 2018-04-10 | Updated: 2018-07-23
Model Robustness Guarantees
Adversarial Learning
Adversarial Attack

Bypassing Feature Squeezing by Increasing Adversary Strength

Authors: Yash Sharma, Pin-Yu Chen | Published: 2018-03-27
Experimental Validation
Adversarial Learning
Adversarial Attack