Poisoning

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

Authors: Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | Published: 2020-07-09
Poisoning
Model Robustness
Attack Method

Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs

Authors: Rana Abou Khamis, Ashraf Matrawy | Published: 2020-07-08
Poisoning
Factors of Performance Degradation
Adversarial Training

On the relationship between class selectivity, dimensionality, and robustness

Authors: Matthew L. Leavitt, Ari S. Morcos | Published: 2020-07-08 | Updated: 2020-10-13
Poisoning
Adversarial Learning
Vulnerability Analysis

Backdoor attacks and defenses in feature-partitioned collaborative learning

Authors: Yang Liu, Zhihao Yi, Tianjian Chen | Published: 2020-07-07
Poisoning
Adversarial Learning
Defense Mechanism

On Data Augmentation and Adversarial Risk: An Empirical Analysis

Authors: Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer | Published: 2020-07-06
Poisoning
Risk Management
Adversarial Learning

Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey

Authors: Samuel Henrique Silva, Peyman Najafirad | Published: 2020-07-01 | Updated: 2020-07-03
Poisoning
Adversarial Example
Adversarial Attack

Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection

Authors: Deqiang Li, Qianmu Li | Published: 2020-06-30
Poisoning
Malware Evolution
Adversarial Attack

Model-Targeted Poisoning Attacks with Provable Convergence

Authors: Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian | Published: 2020-06-30 | Updated: 2021-04-21
Backdoor Attack
Poisoning
Attack Scenario Analysis

Orthogonal Deep Models As Defense Against Black-Box Attacks

Authors: Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian | Published: 2020-06-26
Poisoning
Adversarial Example
Adversarial Attack

Deep Partition Aggregation: Provable Defense against General Poisoning Attacks

Authors: Alexander Levine, Soheil Feizi | Published: 2020-06-26 | Updated: 2021-03-18
Algorithm Design
Poisoning
Defense Mechanism