Backdoor Attack

Learning to Poison Large Language Models for Downstream Manipulation

Authors: Xiangyu Zhou, Yao Qiang, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu | Published: 2024-02-21 | Updated: 2025-05-29
Tags: LLM Security | Backdoor Attack | Poisoning Attack

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors

Authors: Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, Yaoliang Yu | Published: 2024-02-20
Tags: Backdoor Attack | Poisoning | Transfer Learning

Test-Time Backdoor Attacks on Multimodal Large Language Models

Authors: Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin | Published: 2024-02-13
Tags: Backdoor Attack | Model Performance Evaluation | Attack Method

Game-Theoretic Unlearnable Example Generator

Authors: Shuang Liu, Yihan Wang, Xiao-Shan Gao | Published: 2024-01-31
Tags: Watermarking | Backdoor Attack | Poisoning

Decentralized Federated Learning: A Survey on Security and Privacy

Authors: Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif, Boyu Wang, Qiang Yang | Published: 2024-01-25
Tags: Attack Methods against DFL | Backdoor Attack | Privacy Protection Method

Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them

Authors: Chao Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang | Published: 2024-01-22 | Updated: 2024-01-27
Tags: Backdoor Attack | Privacy Protection Method | Membership Inference

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models

Authors: Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li | Published: 2024-01-20
Tags: LLM Performance Evaluation | Backdoor Attack | Prompt Injection

Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning

Authors: Shuai Zhao, Meihuizi Jia, Luu Anh Tuan, Fengjun Pan, Jinming Wen | Published: 2024-01-11 | Updated: 2024-10-09
Tags: Backdoor Attack | Prompt Injection

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Authors: Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez | Published: 2024-01-10 | Updated: 2024-01-17
Tags: Backdoor Attack | Prompt Injection | Reinforcement Learning

MalModel: Hiding Malicious Payload in Mobile Deep Learning Models with Black-box Backdoor Attack

Authors: Jiayi Hua, Kailong Wang, Meizhen Wang, Guangdong Bai, Xiapu Luo, Haoyu Wang | Published: 2024-01-05
Tags: Backdoor Attack | Malware Classification | Model Performance Evaluation