Backdoor Attack

Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees
Authors: Ehsan Nowroozi, Nada Jadalla, Samaneh Ghelichkhani, Alireza Jolfaei | Published: 2024-03-05
Backdoor Attack
Poisoning
Defense Method

Federated Learning Under Attack: Exposing Vulnerabilities through Data Poisoning Attacks in Computer Networks
Authors: Ehsan Nowroozi, Imran Haider, Rahim Taheri, Mauro Conti | Published: 2024-03-05
Backdoor Attack
Poisoning
Federated Learning

Teach LLMs to Phish: Stealing Private Information from Language Models
Authors: Ashwinee Panda, Christopher A. Choquette-Choo, Zhengming Zhang, Yaoqing Yang, Prateek Mittal | Published: 2024-03-01
Backdoor Attack
Phishing Detection
Prompt Injection

Learning to Poison Large Language Models for Downstream Manipulation
Authors: Xiangyu Zhou, Yao Qiang, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu | Published: 2024-02-21 | Updated: 2025-05-29
LLM Security
Backdoor Attack
Poisoning Attack

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Authors: Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, Yaoliang Yu | Published: 2024-02-20
Backdoor Attack
Poisoning
Transfer Learning

Test-Time Backdoor Attacks on Multimodal Large Language Models
Authors: Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin | Published: 2024-02-13
Backdoor Attack
Model Performance Evaluation
Attack Method

Game-Theoretic Unlearnable Example Generator
Authors: Shuang Liu, Yihan Wang, Xiao-Shan Gao | Published: 2024-01-31
Watermarking
Backdoor Attack
Poisoning

Decentralized Federated Learning: A Survey on Security and Privacy
Authors: Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif, Boyu Wang, Qiang Yang | Published: 2024-01-25
Attack Methods against DFL
Backdoor Attack
Privacy Protection Method

Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
Authors: Chao Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang | Published: 2024-01-22 | Updated: 2024-01-27
Backdoor Attack
Privacy Protection Method
Membership Inference

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models
Authors: Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li | Published: 2024-01-20
LLM Performance Evaluation
Backdoor Attack
Prompt Injection