Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Authors: Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu | Published: 2018-06-06
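
As background on the setting this paper studies, the sketch below runs consensus ADMM on a distributed least-squares problem and adds Gaussian noise to each node's primal variable before it is shared, a common privacy device in this literature. This is an illustrative primal-perturbation sketch, not the paper's specific mechanism; `rho`, `noise_std`, and the toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_consensus_admm(A_parts, b_parts, rho=1.0, noise_std=0.1, iters=50):
    """Consensus ADMM for min sum_i 0.5*||A_i x - b_i||^2, where each node
    perturbs its primal update with Gaussian noise before broadcasting it
    (illustrative privacy mechanism, not the paper's scheme)."""
    n = A_parts[0].shape[1]
    m = len(A_parts)
    z = np.zeros(n)                       # global consensus variable
    u = [np.zeros(n) for _ in range(m)]   # scaled dual variables
    # Pre-factor each node's local system (A_i^T A_i + rho I)
    facs = [np.linalg.inv(A.T @ A + rho * np.eye(n)) for A in A_parts]
    for _ in range(iters):
        x = []
        for i in range(m):
            xi = facs[i] @ (A_parts[i].T @ b_parts[i] + rho * (z - u[i]))
            # Perturb before sharing: only the noisy value leaves node i.
            x.append(xi + rng.normal(0.0, noise_std, n))
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)
        for i in range(m):
            u[i] += x[i] - z
    return z

# Toy run: 4 nodes jointly estimating one parameter vector
x_true = rng.normal(size=5)
A_parts = [rng.normal(size=(20, 5)) for _ in range(4)]
b_parts = [A @ x_true + 0.01 * rng.normal(size=20) for A in A_parts]
print(noisy_consensus_admm(A_parts, b_parts))
```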

Killing four birds with one Gaussian process: the relation between different test-time attacks

Authors: Kathrin Grosse, Michael T. Smith, Michael Backes | Published: 2018-06-06 | Updated: 2020-11-29

Set-based Obfuscation for Strong PUFs against Machine Learning Attacks

Authors: Jiliang Zhang, Chaoqun Shen | Published: 2018-06-06 | Updated: 2019-11-13

Evidential Deep Learning to Quantify Classification Uncertainty

Authors: Murat Sensoy, Lance Kaplan, Melih Kandemir | Published: 2018-06-05 | Updated: 2018-10-31
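
For readers unfamiliar with the evidential framing, the sketch below shows the Dirichlet-based uncertainty computation this paper builds on: non-negative per-class evidence gives Dirichlet parameters alpha = evidence + 1, and total uncertainty is K divided by the Dirichlet strength. The raw logits here are stand-ins for a real network's outputs.

```python
import numpy as np

def evidential_outputs(logits):
    """Map raw network outputs to Dirichlet-based beliefs and uncertainty,
    following the evidential deep learning formulation (alpha = evidence + 1).
    `logits` is a placeholder for an actual classifier's output vector."""
    evidence = np.maximum(logits, 0.0)        # e.g. ReLU evidence
    alpha = evidence + 1.0                    # Dirichlet parameters
    S = alpha.sum()                           # Dirichlet strength
    K = len(alpha)                            # number of classes
    belief = evidence / S                     # per-class belief masses
    uncertainty = K / S                       # total uncertainty in (0, 1]
    expected_prob = alpha / S                 # expected class probabilities
    return belief, uncertainty, expected_prob

# Confident prediction: ample evidence for one class, low uncertainty
print(evidential_outputs(np.array([9.0, 0.2, 0.1])))
# Near-zero evidence everywhere: uncertainty approaches 1
print(evidential_outputs(np.array([0.1, 0.0, 0.2])))
```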

An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks

Authors: Chirag Agarwal, Bo Dong, Dan Schonfeld, Anthony Hoogs | Published: 2018-06-05 | Updated: 2018-06-06

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06
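
The central object in this line of work is the adversarially robust risk; a standard formulation (consistent with, though not copied from, the paper) for an evasion adversary constrained to a neighborhood N(x) of each input is:

```latex
% Robust expected risk of a hypothesis h under an evasion adversary
% that may replace the test input x with any point in N(x):
R_{\mathrm{adv}}(h) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
\Big[\, \sup_{x' \in N(x)} \mathbf{1}\{\, h(x') \neq y \,\} \Big]
```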

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
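
As background on the defense ingredient named in the title, parameter-space noise perturbs the Q-network's weights (rather than its chosen actions) before greedy action selection. A minimal sketch with a hypothetical linear Q-function follows; a real DQN would perturb each layer's weights the same way.

```python
import numpy as np

rng = np.random.default_rng(1)

def act_with_parameter_noise(W, state, sigma=0.05):
    """Greedy action from a linear Q-function Q(s, a) = (W + noise) @ s,
    with Gaussian noise applied in parameter space. Purely illustrative;
    `W`, `sigma`, and the dimensions are assumptions, not the paper's setup."""
    W_noisy = W + rng.normal(0.0, sigma, W.shape)
    q_values = W_noisy @ state
    return int(np.argmax(q_values))

W = rng.normal(size=(4, 8))     # toy sizes: 4 actions, 8-dim state
state = rng.normal(size=8)
print(act_with_parameter_noise(W, state))
```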

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Authors: Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes | Published: 2018-06-04 | Updated: 2018-12-14
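
One of this paper's relaxed adversaries needs nothing beyond the target model's posterior vector. The sketch below implements that style of attack as a simple threshold on the maximum posterior, guessing that records the model is very confident on were training members. The threshold and toy posteriors are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def membership_guess(posteriors, threshold=0.9):
    """Confidence-thresholding membership inference: flag a record as a
    training-set member if the model's top posterior exceeds `threshold`.
    Mirrors the data/model-independent adversary style in ML-Leaks."""
    posteriors = np.asarray(posteriors)
    return posteriors.max(axis=-1) >= threshold

# Toy posteriors from some target classifier (hypothetical values):
member_like = [0.97, 0.02, 0.01]      # very confident -> guess member
nonmember_like = [0.55, 0.30, 0.15]   # less confident -> guess non-member
print(membership_guess([member_like, nonmember_like]))  # [ True False]
```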

Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks

Authors: Yarin Gal, Lewis Smith | Published: 2018-06-02 | Updated: 2018-06-28

Detecting Adversarial Examples via Key-based Network

Authors: Pinlong Zhao, Zhouyu Fu, Ou Wu, Qinghua Hu, Jun Wang | Published: 2018-06-02