Adversarial Examples

DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes

Authors: Mehmet Sinan Inci, Thomas Eisenbarth, Berk Sunar | Published: 2018-08-03 | Updated: 2020-04-23
Model Robustness Guarantees
Adversarial Examples
Adversarial Attacks

Limitations of the Lipschitz constant as a defense against adversarial examples

Authors: Todd Huster, Cho-Yu Jason Chiang, Ritu Chadha | Published: 2018-07-25
Model Evaluation
Robustness Evaluation
Adversarial Examples

Motivating the Rules of the Game for Adversarial Example Research

Authors: Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David Andersen, George E. Dahl | Published: 2018-07-18 | Updated: 2018-07-20
Model Robustness Guarantees
Adversarial Examples
Adversarial Attacks

Adversarial Perturbations Against Real-Time Video Classification Systems

Authors: Shasha Li, Ajaya Neupane, Sujoy Paul, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy Chowdhury, Ananthram Swami | Published: 2018-07-02
Dual-Purpose Universal Perturbations
Effective Perturbation Methods
Adversarial Examples

Adversarial Reprogramming of Neural Networks

Authors: Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein | Published: 2018-06-28 | Updated: 2018-11-29
Model Robustness Guarantees
Adversarial Examples
Watermarking

Adversarial Distillation of Bayesian Neural Network Posteriors

Authors: Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, Richard Zemel | Published: 2018-06-27
Model Robustness Guarantees
Adversarial Examples
Deep Learning Techniques

Hardware Trojan Attacks on Neural Networks

Authors: Joseph Clements, Yingjie Lao | Published: 2018-06-14
Trigger Detection
Adversarial Examples
Deep Learning Techniques

Defense Against the Dark Arts: An overview of adversarial example security research and future research directions

Authors: Ian Goodfellow | Published: 2018-06-11
Model Robustness Guarantees
Adversarial Examples
Adversarial Training

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
Model Robustness Guarantees
Reinforcement Learning
Adversarial Examples

Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks

Authors: Yarin Gal, Lewis Smith | Published: 2018-06-02 | Updated: 2018-06-28
Label Uncertainty
Adversarial Examples
Adversarial Transferability