Literature Database

An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks

Authors: Chirag Agarwal, Bo Dong, Dan Schonfeld, Anthony Hoogs | Published: 2018-06-05 | Updated: 2018-06-06
Adversarial Example Detection
Adversarial Transferability
Watermark Evaluation

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06
Model Robustness Guarantees
Loss Function
Adversarial Transferability

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
Model Robustness Guarantees
Reinforcement Learning
Adversarial Examples

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Authors: Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes | Published: 2018-06-04 | Updated: 2018-12-14
Membership Inference
Model Extraction Attacks
Watermark Evaluation

Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks

Authors: Yarin Gal, Lewis Smith | Published: 2018-06-02 | Updated: 2018-06-28
Label Uncertainty
Adversarial Examples
Adversarial Transferability

Detecting Adversarial Examples via Key-based Network

Authors: Pinlong Zhao, Zhouyu Fu, Ou Wu, Qinghua Hu, Jun Wang | Published: 2018-06-02
Adversarial Training
Adversarial Transferability
Watermark Evaluation

Tokenized Data Markets

Authors: Bharath Ramsundar, Roger Chen, Alok Vasudev, Rob Robbins, Artur Gorokh | Published: 2018-05-31
Data Flow Analysis
Voting Mechanisms
Watermark Evaluation

PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks

Authors: Jan Svoboda, Jonathan Masci, Federico Monti, Michael M. Bronstein, Leonidas Guibas | Published: 2018-05-31
Trigger Detection
Adversarial Example Detection
Deep Learning Methods

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Authors: Partha Ghosh, Arpan Losalka, Michael J Black | Published: 2018-05-31 | Updated: 2018-12-10
Model Robustness Guarantees
Loss Function
Adversarial Examples

Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations

Authors: Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su | Published: 2018-05-31 | Updated: 2018-12-13
Model Robustness Guarantees
Model Extraction Attack Detection
Watermark Evaluation