Backdoor Attack

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks

Authors: Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann | Published: 2024-07-15 | Updated: 2024-10-14
Tags: Backdoor Attack, Poisoning, Optimization Problem

Model-agnostic clean-label backdoor mitigation in cybersecurity environments

Authors: Giorgio Severi, Simona Boboila, John Holodnak, Kendra Kratkiewicz, Rauf Izmailov, Michael J. De Lucia, Alina Oprea | Published: 2024-07-11 | Updated: 2025-05-05
Tags: Backdoor Detection, Backdoor Attack, Defense Mechanism

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models

Authors: Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran | Published: 2024-06-18 | Updated: 2025-03-27
Tags: LLM Security, Backdoor Attack, Prompt Injection

Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach

Authors: Orson Mengara | Published: 2024-06-15 | Updated: 2024-09-16
Tags: Backdoor Attack, Financial Intelligence

RMF: A Risk Measurement Framework for Machine Learning Models

Authors: Jan Schröder, Jakub Breier | Published: 2024-06-15
Tags: Backdoor Attack, Poisoning, Risk Management

A Study of Backdoors in Instruction Fine-tuned Language Models

Authors: Jayaram Raghuram, George Kesidis, David J. Miller | Published: 2024-06-12 | Updated: 2024-08-21
Tags: LLM Security, Backdoor Attack, Defense Method

A Survey of Recent Backdoor Attacks and Defenses in Large Language Models

Authors: Shuai Zhao, Meihuizi Jia, Zhongliang Guo, Leilei Gan, Xiaoyu Xu, Xiaobao Wu, Jie Fu, Yichao Feng, Fengjun Pan, Luu Anh Tuan | Published: 2024-06-10 | Updated: 2025-01-04
Tags: LLM Security, Backdoor Attack

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Authors: Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong | Published: 2024-06-10
Tags: LLM Security, Backdoor Attack, Prompt Injection

Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning

Authors: Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Yongsheng Zhu, Guangquan Xu, Jiqiang Liu, Xiangliang Zhang | Published: 2024-06-10
Tags: Backdoor Attack, Poisoning

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks

Authors: Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu | Published: 2024-06-10
Tags: Backdoor Attack, Poisoning, Membership Inference