Literature Database

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning

Authors: Amin Rakhsha, Goran Radanovic, Rati Devidze, Xiaojin Zhu, Adish Singla | Published: 2020-03-28 | Updated: 2020-08-19
Reward Poisoning
Reinforcement Learning
Attack Type

Adaptive Reward-Poisoning Attacks against Reinforcement Learning

Authors: Xuezhou Zhang, Yuzhe Ma, Adish Singla, Xiaojin Zhu | Published: 2020-03-27 | Updated: 2020-06-22
Q-Learning Algorithm
Backdoor Attack
Reinforcement Learning Attack

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks

Authors: Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta | Published: 2020-03-26 | Updated: 2021-12-13
Poisoning
Adversarial Attack Methods
Vulnerability Attack Method

Adversarial Perturbations Fool Deepfake Detectors

Authors: Apurva Gandhi, Shomik Jain | Published: 2020-03-24 | Updated: 2020-05-15
Adversarial Example
Adversarial Attack Methods
Defense Method

Systematic Evaluation of Privacy Risks of Machine Learning Models

Authors: Liwei Song, Prateek Mittal | Published: 2020-03-24 | Updated: 2020-12-09
Privacy Protection Method
Membership Inference
Defense Method

DYSAN: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks

Authors: Claude Rosin Ngueveu, Antoine Boutet, Carole Frindel, Sébastien Gambs, Théo Jourdan | Published: 2020-03-23 | Updated: 2020-10-08
Training Method
Privacy Protection Method
User Activity Analysis

FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture

Authors: Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang | Published: 2020-03-20 | Updated: 2021-04-12
Robustness
Vulnerability Detection
Weight Update Method

One Neuron to Fool Them All

Authors: Anshuman Suri, David Evans | Published: 2020-03-20 | Updated: 2020-06-09
Training Method
Robustness
Adversarial Example

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

Authors: Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh | Published: 2020-03-19 | Updated: 2021-07-14
Training Method
Hyperparameter Optimization
Robustness

RAB: Provable Robustness Against Backdoor Attacks

Authors: Maurice Weber, Xiaojun Xu, Bojan Karlaš, Ce Zhang, Bo Li | Published: 2020-03-19 | Updated: 2023-08-03
Backdoor Attack
Robustness
Adversarial Example