Adversarial Attacks, Regression, and Numerical Stability Regularization

Authors: Andre T. Nguyen, Edward Raff | Published: 2018-12-07

Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase

Authors: Jianfeng Chi, Emmanuel Owusu, Xuwang Yin, Tong Yu, William Chan, Patrick Tague, Yuan Tian | Published: 2018-12-07

Knockoff Nets: Stealing Functionality of Black-Box Models

Authors: Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz | Published: 2018-12-06

The Limitations of Model Uncertainty in Adversarial Settings

Authors: Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes | Published: 2018-12-06 | Updated: 2019-11-17

Prior Networks for Detection of Adversarial Attacks

Authors: Andrey Malinin, Mark Gales | Published: 2018-12-06

On Configurable Defense against Adversarial Example Attacks

Authors: Bo Luo, Min Li, Yu Li, Qiang Xu | Published: 2018-12-06

When Homomorphic Cryptosystem Meets Differential Privacy: Training Machine Learning Classifier with Privacy Protection

Authors: Xiangyun Tang, Liehuang Zhu, Meng Shen, Xiaojiang Du | Published: 2018-12-06

Differentially Private Data Generative Models

Authors: Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar, Haojin Zhu | Published: 2018-12-06

Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge

Authors: Jinyuan Jia, Neil Zhenqiang Gong | Published: 2018-12-05 | Updated: 2018-12-11

Regularized Ensembles and Transferability in Adversarial Learning

Authors: Yifan Chen, Yevgeniy Vorobeychik | Published: 2018-12-05