Model Robustness Guarantees

Adversarial Distillation of Bayesian Neural Network Posteriors

Authors: Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, Richard Zemel | Published: 2018-06-27
Model Robustness Guarantees
Adversarial Examples
Deep Learning Technology

Built-in Vulnerabilities to Imperceptible Adversarial Perturbations

Authors: Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin | Published: 2018-06-19 | Updated: 2019-05-07
Model Robustness Guarantees
Adversarial Learning
Adversarial Training

Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data

Authors: Jacson Rodrigues Correia-Silva, Rodrigo F. Berriel, Claudine Badue, Alberto F. de Souza, Thiago Oliveira-Santos | Published: 2018-06-14
Poisoning
Model Robustness Guarantees
Face Recognition Systems

Defense Against the Dark Arts: An overview of adversarial example security research and future research directions

Authors: Ian Goodfellow | Published: 2018-06-11
Model Robustness Guarantees
Adversarial Examples
Adversarial Training

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

Authors: Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade | Published: 2018-06-09
Model Robustness Guarantees
Encrypted Traffic Detection
Deep Learning Technology

Adversarial Attack on Graph Structured Data

Authors: Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song | Published: 2018-06-06
Graph Representation Learning
Backdoor Attacks
Model Robustness Guarantees

Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Authors: Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu | Published: 2018-06-06
Privacy Preservation Methods
Model Robustness Guarantees
Federated Learning

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06
Model Robustness Guarantees
Loss Functions
Adversarial Transferability

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
Model Robustness Guarantees
Reinforcement Learning
Adversarial Examples

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Authors: Partha Ghosh, Arpan Losalka, Michael J Black | Published: 2018-05-31 | Updated: 2018-12-10
Model Robustness Guarantees
Loss Functions
Adversarial Examples