LLM Security

On Trojan Signatures in Large Language Models of Code

Authors: Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour | Published: 2024-02-23 | Updated: 2024-03-07
LLM Security
Trojan Signature
Trojan Detection

Coercing LLMs to do and reveal (almost) anything

Authors: Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein | Published: 2024-02-21
LLM Security
Prompt Injection
Attack Method

Learning to Poison Large Language Models for Downstream Manipulation

Authors: Xiangyu Zhou, Yao Qiang, Saleh Zare Zade, Mohammad Amin Roshani, Prashant Khanduri, Douglas Zytko, Dongxiao Zhu | Published: 2024-02-21 | Updated: 2025-05-29
LLM Security
Backdoor Attack
Poisoning Attack

A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models

Authors: Zihao Xu, Yi Liu, Gelei Deng, Yuekang Li, Stjepan Picek | Published: 2024-02-21 | Updated: 2024-05-17
LLM Security
Prompt Injection
Defense Method

The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative

Authors: Zhen Tan, Chengshuai Zhao, Raha Moraffah, Yifan Li, Yu Kong, Tianlong Chen, Huan Liu | Published: 2024-02-20 | Updated: 2024-06-03
LLM Security
Classification of Malicious Actors
Attack Method

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06
LLM Security
LLM Performance Evaluation
Prompt Injection

Prompt Stealing Attacks Against Large Language Models

Authors: Zeyang Sha, Yang Zhang | Published: 2024-02-20
LLM Security
Prompt Injection
Prompt Engineering

SPML: A DSL for Defending Language Models Against Prompt Attacks

Authors: Reshabh K Sharma, Vinayak Gupta, Dan Grossman | Published: 2024-02-19
LLM Security
System Prompt Generation
Prompt Injection

Using Hallucinations to Bypass GPT4’s Filter

Authors: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11
LLM Security
Prompt Injection
Inappropriate Content Generation

PAL: Proxy-Guided Black-Box Attack on Large Language Models

Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15
LLM Security
Prompt Injection
Attack Method