BadGD: A unified data-centric framework to identify gradient descent vulnerabilities
Authors: Chi-Hua Wang, Guang Cheng | Published: 2024-05-24
Tags: Backdoor Attack, Poisoning

A GAN-Based Data Poisoning Attack Against Federated Learning Systems and Its Countermeasure
Authors: Wei Sun, Bo Gao, Ke Xiong, Yuwei Wang | Published: 2024-05-19 | Updated: 2024-05-21
Tags: Backdoor Attack, Poisoning, Defense Method

Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Authors: Yujie Zhang, Neil Gong, Michael K. Reiter | Published: 2024-05-10 | Updated: 2024-09-09
Tags: Backdoor Attack, Poisoning

Unlearning Backdoor Attacks through Gradient-Based Model Pruning
Authors: Kealan Dunnett, Reza Arablouei, Dimity Miller, Volkan Dedeoglu, Raja Jurdak | Published: 2024-05-07
Tags: Backdoor Attack, Model Performance Evaluation

TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning
Authors: Xuanli He, Jun Wang, Qiongkai Xu, Pasquale Minervini, Pontus Stenetorp, Benjamin I. P. Rubinstein, Trevor Cohn | Published: 2024-04-30 | Updated: 2025-03-17
Tags: Content Moderation, Backdoor Attack, Prompt Injection

Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models
Authors: Jiaming He, Wenbo Jiang, Guanyu Hou, Wenshu Fan, Rui Zhang, Hongwei Li | Published: 2024-04-23 | Updated: 2025-01-08
Tags: LLM Security, Backdoor Attack, Poisoning

Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs
Authors: Javier Rando, Francesco Croce, Kryštof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr | Published: 2024-04-22 | Updated: 2024-06-06
Tags: LLM Security, Backdoor Attack, Prompt Injection

Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models
Authors: Zhenyang Ni, Rui Ye, Yuxi Wei, Zhen Xiang, Yanfeng Wang, Siheng Chen | Published: 2024-04-19 | Updated: 2024-04-22
Tags: Backdoor Attack, Vulnerabilities in Autonomous Driving Technology

Exploring Backdoor Vulnerabilities of Chat Models
Authors: Yunzhuo Hao, Wenkai Yang, Yankai Lin | Published: 2024-04-03
Tags: Backdoor Attack, Prompt Injection

Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
Authors: Yuxin Wen, Leo Marchyok, Sanghyun Hong, Jonas Geiping, Tom Goldstein, Nicholas Carlini | Published: 2024-04-01
Tags: Backdoor Attack, Poisoning, Membership Inference