Certified Robustness

Built-in Vulnerabilities to Imperceptible Adversarial Perturbations

Authors: Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin | Published: 2018-06-19 | Updated: 2019-05-07
Tags: Certified Robustness, Adversarial Learning, Adversarial Training

Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data

Authors: Jacson Rodrigues Correia-Silva, Rodrigo F. Berriel, Claudine Badue, Alberto F. de Souza, Thiago Oliveira-Santos | Published: 2018-06-14
Tags: Poisoning, Certified Robustness, Face Recognition System

Defense Against the Dark Arts: An overview of adversarial example security research and future research directions

Authors: Ian Goodfellow | Published: 2018-06-11
Tags: Certified Robustness, Adversarial Example, Adversarial Training

TAPAS: Tricks to Accelerate (encrypted) Prediction As a Service

Authors: Amartya Sanyal, Matt J. Kusner, Adrià Gascón, Varun Kanade | Published: 2018-06-09
Tags: Certified Robustness, Encrypted Traffic Detection, Deep Learning Technology

Adversarial Attack on Graph Structured Data

Authors: Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song | Published: 2018-06-06
Tags: Graph Representation Learning, Backdoor Attack, Certified Robustness

Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

Authors: Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu | Published: 2018-06-06
Tags: Privacy Protection Method, Certified Robustness, Federated Learning

PAC-learning in the presence of evasion adversaries

Authors: Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal | Published: 2018-06-05 | Updated: 2018-06-06
Tags: Certified Robustness, Loss Function, Adversarial Transferability

Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise

Authors: Vahid Behzadan, Arslan Munir | Published: 2018-06-04
Tags: Certified Robustness, Reinforcement Learning, Adversarial Example

Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders

Authors: Partha Ghosh, Arpan Losalka, Michael J Black | Published: 2018-05-31 | Updated: 2018-12-10
Tags: Certified Robustness, Loss Function, Adversarial Example

Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations

Authors: Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su | Published: 2018-05-31 | Updated: 2018-12-13
Tags: Certified Robustness, Detection of Model Extraction Attacks, Watermark Evaluation