Literature Database

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Authors: Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes | Published: 2018-06-04 | Updated: 2018-12-14
Membership Inference
Model Extraction Attack
Watermark Evaluation

Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks

Authors: Yarin Gal, Lewis Smith | Published: 2018-06-02 | Updated: 2018-06-28
Label Uncertainty
Adversarial Example
Adversarial Transferability

Detecting Adversarial Examples via Key-based Network

Authors: Pinlong Zhao, Zhouyu Fu, Ou Wu, Qinghua Hu, Jun Wang | Published: 2018-06-02
Adversarial Training
Adversarial Transferability
Watermark Evaluation

Tokenized Data Markets

Authors: Bharath Ramsundar, Roger Chen, Alok Vasudev, Rob Robbins, Artur Gorokh | Published: 2018-05-31
Data Flow Analysis
Voting Mechanism
Watermark Evaluation

PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks

Authors: Jan Svoboda, Jonathan Masci, Federico Monti, Michael M. Bronstein, Leonidas Guibas | Published: 2018-05-31
Trigger Detection
Adversarial Example Detection
Deep Learning Method

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Authors: Partha Ghosh, Arpan Losalka, Michael J Black | Published: 2018-05-31 | Updated: 2018-12-10
Model Robustness Guarantee
Loss Function
Adversarial Example

Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations

Authors: Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su | Published: 2018-05-31 | Updated: 2018-12-13
Model Robustness Guarantee
Model Extraction Attack Detection
Watermark Evaluation

Sequential Attacks on Agents for Long-Term Adversarial Goals

Authors: Edgar Tretschk, Seong Joon Oh, Mario Fritz | Published: 2018-05-31 | Updated: 2018-07-05
Model Robustness Guarantee
Reinforcement Learning
Adversarial Transferability

Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data

Authors: Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, Michael I. Jordan | Published: 2018-05-31
Adversarial Transferability
Feature Importance Analysis
Watermark Evaluation

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

Authors: Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg | Published: 2018-05-30
Backdoor Model Detection
Attack Method
Deep Learning