Prompt Injection

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification

Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06
Tags: LLM Security, LLM Performance Evaluation, Prompt Injection

Prompt Stealing Attacks Against Large Language Models

Authors: Zeyang Sha, Yang Zhang | Published: 2024-02-20
Tags: LLM Security, Prompt Injection, Prompt Engineering

Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models

Authors: Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein | Published: 2024-02-19 | Updated: 2024-06-05
Tags: Prompt Injection, Robustness Evaluation, Adversarial Training

An Empirical Evaluation of LLMs for Solving Offensive Security Challenges

Authors: Minghao Shao, Boyuan Chen, Sofija Jancheska, Brendan Dolan-Gavitt, Siddharth Garg, Ramesh Karri, Muhammad Shafique | Published: 2024-02-19
Tags: LLM Performance Evaluation, Prompt Injection, Educational CTF

SPML: A DSL for Defending Language Models Against Prompt Attacks

Authors: Reshabh K Sharma, Vinayak Gupta, Dan Grossman | Published: 2024-02-19
Tags: LLM Security, System Prompt Generation, Prompt Injection

Using Hallucinations to Bypass GPT4’s Filter

Authors: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11
Tags: LLM Security, Prompt Injection, Inappropriate Content Generation

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns

Authors: Ashfak Md Shibli, Mir Mehedi A. Pritom, Maanak Gupta | Published: 2024-02-15
Tags: Abuse of AI Chatbots, Cyber Attack, Prompt Injection

PAL: Proxy-Guided Black-Box Attack on Large Language Models

Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15
Tags: LLM Security, Prompt Injection, Attack Method

Copyright Traps for Large Language Models

Authors: Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye | Published: 2024-02-14 | Updated: 2024-06-04
Tags: Trap Sequence Generation, Prompt Injection, Copyright Trap

Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast

Authors: Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin | Published: 2024-02-13 | Updated: 2024-06-03
Tags: LLM Security, Prompt Injection, Adversarial Attack Detection