Literature Database

Knockoff Nets: Stealing Functionality of Black-Box Models

Authors: Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz | Published: 2018-12-06
Model Extraction Attack
Medical Image Analysis
Reinforcement Learning

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17
Model Robustness Guarantees
Robustness Evaluation
Adversarial Examples

Prior Networks for Detection of Adversarial Attacks

Authors: Andrey Malinin, Mark Gales | Published: 2018-12-06
Detection of Model Extraction Attacks
Robustness Evaluation
Adversarial Training

On Configurable Defense against Adversarial Example Attacks

Authors: Bo Luo, Min Li, Yu Li, Qiang Xu | Published: 2018-12-06
Adversarial Examples
Adversarial Training
Defense Methods

When Homomorphic Cryptosystem Meets Differential Privacy: Training Machine Learning Classifier with Privacy Protection

Authors: Xiangyun Tang, Liehuang Zhu, Meng Shen, Xiaojiang Du | Published: 2018-12-06
Performance Evaluation
Privacy Protection
Differential Privacy

Differentially Private Data Generative Models

Authors: Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaarfar, Haojin Zhu | Published: 2018-12-06
Model Inversion
Differential Privacy
Challenges of Generative Models

Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy by Incorporating Prior Knowledge

Authors: Jinyuan Jia, Neil Zhenqiang Gong | Published: 2018-12-05 | Updated: 2018-12-11
Data Collection
Generalization Performance
Probability Distribution

Regularized Ensembles and Transferability in Adversarial Learning

Authors: Yifan Chen, Yevgeniy Vorobeychik | Published: 2018-12-05
Model Robustness Guarantees
Generalization Performance
Knowledge Transferability

Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples

Authors: Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li | Published: 2018-12-05 | Updated: 2020-01-20
Model Robustness Guarantees
Adversarial Examples
Defense Methods

Outsourcing Private Machine Learning via Lightweight Secure Arithmetic Computation

Authors: Siddharth Garg, Zahra Ghodsi, Carmit Hazay, Yuval Ishai, Antonio Marcedone, Muthuramakrishnan Venkitasubramaniam | Published: 2018-12-04
Medical Image Analysis
Secure Arithmetic Computation
Differential Privacy