On Trojan Signatures in Large Language Models of Code | Authors: Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour | Published: 2024-02-23 | Updated: 2024-03-07 | Tags: LLM Security, Trojan Horse Signature, Trojan Detection
Coercing LLMs to do and reveal (almost) anything | Authors: Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein | Published: 2024-02-21 | Tags: LLM Security, Prompt Injection, Attack Method
Learning to Poison Large Language Models for Downstream Manipulation | Authors: Xiangyu Zhou, Yao Qiang, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu | Published: 2024-02-21 | Updated: 2025-05-29 | Tags: LLM Security, Backdoor Attack, Poisoning Attack
A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models | Authors: Zihao Xu, Yi Liu, Gelei Deng, Yuekang Li, Stjepan Picek | Published: 2024-02-21 | Updated: 2024-05-17 | Tags: LLM Security, Prompt Injection, Defense Method
The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative | Authors: Zhen Tan, Chengshuai Zhao, Raha Moraffah, Yifan Li, Yu Kong, Tianlong Chen, Huan Liu | Published: 2024-02-20 | Updated: 2024-06-03 | Tags: LLM Security, Classification of Malicious Actors, Attack Method
TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification | Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
Prompt Stealing Attacks Against Large Language Models | Authors: Zeyang Sha, Yang Zhang | Published: 2024-02-20 | Tags: LLM Security, Prompt Injection, Prompt Engineering
SPML: A DSL for Defending Language Models Against Prompt Attacks | Authors: Reshabh K Sharma, Vinayak Gupta, Dan Grossman | Published: 2024-02-19 | Tags: LLM Security, System Prompt Generation, Prompt Injection
Using Hallucinations to Bypass GPT4’s Filter | Authors: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11 | Tags: LLM Security, Prompt Injection, Inappropriate Content Generation
PAL: Proxy-Guided Black-Box Attack on Large Language Models | Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15 | Tags: LLM Security, Prompt Injection, Attack Method