Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | Authors: Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | Published: 2020-07-09 | Tags: Poisoning, Model Robustness, Attack Method
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs | Authors: Rana Abou Khamis, Ashraf Matrawy | Published: 2020-07-08 | Tags: Poisoning, Factors of Performance Degradation, Adversarial Training
On the relationship between class selectivity, dimensionality, and robustness | Authors: Matthew L. Leavitt, Ari S. Morcos | Published: 2020-07-08 | Updated: 2020-10-13 | Tags: Poisoning, Adversarial Learning, Vulnerability Analysis
Backdoor attacks and defenses in feature-partitioned collaborative learning | Authors: Yang Liu, Zhihao Yi, Tianjian Chen | Published: 2020-07-07 | Tags: Poisoning, Adversarial Learning, Defense Mechanism
On Data Augmentation and Adversarial Risk: An Empirical Analysis | Authors: Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer | Published: 2020-07-06 | Tags: Poisoning, Risk Management, Adversarial Learning
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey | Authors: Samuel Henrique Silva, Peyman Najafirad | Published: 2020-07-01 | Updated: 2020-07-03 | Tags: Poisoning, Adversarial Example, Adversarial Attack
Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection | Authors: Deqiang Li, Qianmu Li | Published: 2020-06-30 | Tags: Poisoning, Malware Evolution, Adversarial Attack
Model-Targeted Poisoning Attacks with Provable Convergence | Authors: Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian | Published: 2020-06-30 | Updated: 2021-04-21 | Tags: Backdoor Attack, Poisoning, Attack Scenario Analysis
Orthogonal Deep Models As Defense Against Black-Box Attacks | Authors: Mohammad A. A. K. Jalwana, Naveed Akhtar, Mohammed Bennamoun, Ajmal Mian | Published: 2020-06-26 | Tags: Poisoning, Adversarial Example, Adversarial Attack
Deep Partition Aggregation: Provable Defense against General Poisoning Attacks | Authors: Alexander Levine, Soheil Feizi | Published: 2020-06-26 | Updated: 2021-03-18 | Tags: Algorithm Design, Poisoning, Defense Mechanism