Model Robustness Guarantees

Defense Against the Dark Arts: An overview of adversarial example security research and future research directions

Authors: Ian Goodfellow | Published: 2018-06-11
Model Robustness Guarantees
Adversarial Examples
Adversarial Training

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

Authors: Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade | Published: 2018-06-09
Model Robustness Guarantees
Encrypted Traffic Detection
Deep Learning Techniques

Adversarial Attack on Graph Structured Data

Authors: Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song | Published: 2018-06-06
Graph Representation Learning
Backdoor Attacks
Model Robustness Guarantees

Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Authors: Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu | Published: 2018-06-06
Privacy-Preserving Methods
Model Robustness Guarantees
Federated Learning

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06
Model Robustness Guarantees
Loss Functions
Adversarial Transferability

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
Model Robustness Guarantees
Reinforcement Learning
Adversarial Examples

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Authors: Partha Ghosh, Arpan Losalka, Michael J Black | Published: 2018-05-31 | Updated: 2018-12-10
Model Robustness Guarantees
Loss Functions
Adversarial Examples

Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations

Authors: Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su | Published: 2018-05-31 | Updated: 2018-12-13
Model Robustness Guarantees
Model Extraction Attack Detection
Watermarking Evaluation

Sequential Attacks on Agents for Long-Term Adversarial Goals

Authors: Edgar Tretschk, Seong Joon Oh, Mario Fritz | Published: 2018-05-31 | Updated: 2018-07-05
Model Robustness Guarantees
Reinforcement Learning
Adversarial Transferability

Adversarial Noise Attacks of Deep Learning Architectures — Stability Analysis via Sparse Modeled Signals

Authors: Yaniv Romano, Aviad Aberdam, Jeremias Sulam, Michael Elad | Published: 2018-05-29 | Updated: 2019-08-05
Sparsity Optimization
Model Robustness Guarantees
Watermarking Evaluation