Backdoor Attack

Machine Unlearning: Taxonomy, Metrics, Applications, Challenges, and Prospects

Authors: Na Li, Chunyi Zhou, Yansong Gao, Hui Chen, Anmin Fu, Zhi Zhang, Shui Yu | Published: 2024-03-13
Backdoor Attack
Membership Inference
Machine Unlearning

Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code

Authors: Cristina Improta | Published: 2024-03-11
Security Analysis
Backdoor Attack
Poisoning

Fake or Compromised? Making Sense of Malicious Clients in Federated Learning

Authors: Hamid Mozaffari, Sunav Choudhary, Amir Houmansadr | Published: 2024-03-10
Backdoor Attack
Poisoning
Malicious Client

On Protecting the Data Privacy of Large Language Models (LLMs): A Survey

Authors: Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren, Xiuzhen Cheng | Published: 2024-03-08 | Updated: 2024-03-14
Backdoor Attack
Privacy Protection Method
Prompt Injection

Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees

Authors: Ehsan Nowroozi, Nada Jadalla, Samaneh Ghelichkhani, Alireza Jolfaei | Published: 2024-03-05
Backdoor Attack
Poisoning
Defense Method

Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks

Authors: Ehsan Nowroozi, Imran Haider, Rahim Taheri, Mauro Conti | Published: 2024-03-05
Backdoor Attack
Poisoning
Federated Learning

Teach LLMs to Phish: Stealing Private Information from Language Models

Authors: Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal | Published: 2024-03-01
Backdoor Attack
Phishing Detection
Prompt Injection

Learning to Poison Large Language Models for Downstream Manipulation

Authors: Xiangyu Zhou, Yao Qiang, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu | Published: 2024-02-21 | Updated: 2025-05-29
LLM Security
Backdoor Attack
Poisoning Attack

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors

Authors: Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, Yaoliang Yu | Published: 2024-02-20
Backdoor Attack
Poisoning
Transfer Learning

Test-Time Backdoor Attacks on Multimodal Large Language Models

Authors: Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin | Published: 2024-02-13
Backdoor Attack
Model Performance Evaluation
Attack Method