Literature Database

Adversarial Regression with Multiple Learners

Authors: Liang Tong, Sixie Yu, Scott Alfeld, Yevgeniy Vorobeychik | Published: 2018-06-06
Poisoning
Loss Function
Adversarial Learning

Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Authors: Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu | Published: 2018-06-06
Privacy Protection Method
Certified Robustness
Federated Learning

Killing four birds with one Gaussian process: the relation between different test-time attacks

Authors: Kathrin Grosse, Michael T. Smith, Michael Backes | Published: 2018-06-06 | Updated: 2020-11-29
Prompt Leaking
Membership Inference
Watermark Evaluation

Set-based Obfuscation for Strong PUFs against Machine Learning Attacks

Authors: Jiliang Zhang, Chaoqun Shen | Published: 2018-06-06 | Updated: 2019-11-13
Cybersecurity
User Authentication System
Watermark Evaluation

Evidential Deep Learning to Quantify Classification Uncertainty

Authors: Murat Sensoy, Lance Kaplan, Melih Kandemir | Published: 2018-06-05 | Updated: 2018-10-31
Quantification of Uncertainty
Uncertainty Assessment
Deep Learning Method

An Explainable Adversarial Robustness Metric for Deep Learning Neural Networks

Authors: Chirag Agarwal, Bo Dong, Dan Schonfeld, Anthony Hoogs | Published: 2018-06-05 | Updated: 2018-06-06
Adversarial Example Detection
Adversarial Transferability
Watermark Evaluation

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06
Certified Robustness
Loss Function
Adversarial Transferability

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
Certified Robustness
Reinforcement Learning
Adversarial Example

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Authors: Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes | Published: 2018-06-04 | Updated: 2018-12-14
Membership Inference
Model Extraction Attack
Watermark Evaluation

Sufficient Conditions for Idealised Models to Have No Adversarial Examples: a Theoretical and Empirical Study with Bayesian Neural Networks

Authors: Yarin Gal, Lewis Smith | Published: 2018-06-02 | Updated: 2018-06-28
Label Uncertainty
Adversarial Example
Adversarial Transferability