PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks

Authors: Hang Yu, Aishan Liu, Xianglong Liu, Gengchao Li, Ping Luo, Ran Cheng, Jichen Yang, Chongzhi Zhang | Published: 2019-09-11 | Updated: 2020-02-24
Poisoning
Model Robustness
Attack Method

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures

Authors: Gil Fidel, Ron Bitton, Asaf Shabtai | Published: 2019-09-08
Poisoning
Adversarial Example
Adversarial Example Detection

Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents

Authors: Xian Yeow Lee, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, Soumik Sarkar | Published: 2019-09-05 | Updated: 2019-11-19
Poisoning
Attack Pattern Extraction
Adversarial Training

Metric Learning for Adversarial Robustness

Authors: Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray | Published: 2019-09-03 | Updated: 2019-10-28
Poisoning
Improvement of Learning
Vulnerability of Adversarial Examples

Universal, transferable and targeted adversarial attacks

Authors: Junde Wu, Rao Fu | Published: 2019-08-29 | Updated: 2022-06-13
Poisoning
Adversarial Example
Adversarial Attack Detection

Transferring Robustness for Graph Neural Network Against Poisoning Attacks

Authors: Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang | Published: 2019-08-20 | Updated: 2020-02-26
Poisoning
Robustness Improvement Method
Content Specific to Poisoning Attacks

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses

Authors: Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin | Published: 2019-08-20
Poisoning
Robustness Improvement Method
Adversarial Attack Methods

On Defending Against Label Flipping Attacks on Malware Detection Systems

Authors: Rahim Taheri, Reza Javidan, Mohammad Shojafar, Zahra Pooranian, Ali Miri, Mauro Conti | Published: 2019-08-13 | Updated: 2020-06-16
Poisoning
Adversarial Attack Methods
Computational Complexity

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

Authors: Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin | Published: 2019-07-26 | Updated: 2019-12-04
Poisoning
Effective Perturbation Methods
Adversarial Transferability

Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

Authors: Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski | Published: 2019-07-17 | Updated: 2019-10-03
Backdoor Attack
Poisoning
Adversarial Attack Methods