Set-based Obfuscation for Strong PUFs against Machine Learning Attacks

Authors: Jiliang Zhang, Chaoqun Shen | Published: 2018-06-06 | Updated: 2019-11-13

Evidential Deep Learning to Quantify Classification Uncertainty

Authors: Murat Sensoy, Lance Kaplan, Melih Kandemir | Published: 2018-06-05 | Updated: 2018-10-31

An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks

Authors: Chirag Agarwal, Bo Dong, Dan Schonfeld, Anthony Hoogs | Published: 2018-06-05 | Updated: 2018-06-06

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Authors: Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes | Published: 2018-06-04 | Updated: 2018-12-14

Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks

Authors: Yarin Gal, Lewis Smith | Published: 2018-06-02 | Updated: 2018-06-28

Detecting Adversarial Examples via Key-based Network

Authors: Pinlong Zhao, Zhouyu Fu, Ou Wu, Qinghua Hu, Jun Wang | Published: 2018-06-02

Tokenized Data Markets

Authors: Bharath Ramsundar, Roger Chen, Alok Vasudev, Rob Robbins, Artur Gorokh | Published: 2018-05-31

PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks

Authors: Jan Svoboda, Jonathan Masci, Federico Monti, Michael M. Bronstein, Leonidas Guibas | Published: 2018-05-31