Poisoning

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

Authors: Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim | Published: 2020-07-21 | Updated: 2020-08-02
Tags: Backdoor Attack, Poisoning, Attack Method

Adversarial Immunization for Certifiable Robustness on Graphs

Authors: Shuchang Tao, Huawei Shen, Qi Cao, Liang Hou, Xueqi Cheng | Published: 2020-07-19 | Updated: 2021-08-25
Tags: Graph Transformation, Poisoning, Computational Complexity

Data Poisoning Attacks Against Federated Learning Systems

Authors: Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu | Published: 2020-07-16 | Updated: 2020-08-11
Tags: Poisoning, Performance Evaluation, Attack Method

A simple defense against adversarial attacks on heatmap explanations

Authors: Laura Rieger, Lars Kai Hansen | Published: 2020-07-13
Tags: Poisoning, Attack Method, Defense Mechanism

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification

Authors: Chuanshuai Chen, Jiazhu Dai | Published: 2020-07-11 | Updated: 2021-03-15
Tags: Text Generation Method, Backdoor Attack, Poisoning

Improving Adversarial Robustness by Enforcing Local and Global Compactness

Authors: Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung | Published: 2020-07-10
Tags: Poisoning, Performance Evaluation, Deep Learning

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

Authors: Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | Published: 2020-07-09
Tags: Poisoning, Model Robustness, Attack Method

Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs

Authors: Rana Abou Khamis, Ashraf Matrawy | Published: 2020-07-08
Tags: Poisoning, Factors of Performance Degradation, Adversarial Training

On the relationship between class selectivity, dimensionality, and robustness

Authors: Matthew L. Leavitt, Ari S. Morcos | Published: 2020-07-08 | Updated: 2020-10-13
Tags: Poisoning, Adversarial Learning, Vulnerability Analysis

Backdoor attacks and defenses in feature-partitioned collaborative learning

Authors: Yang Liu, Zhihao Yi, Tianjian Chen | Published: 2020-07-07
Tags: Poisoning, Adversarial Learning, Defense Mechanism