Coercing LLMs to do and reveal (almost) anything
Authors: Jonas Geiping, Alex Stein, Manli Shu, Khalid Saifullah, Yuxin Wen, Tom Goldstein | Published: 2024-02-21 | Tags: LLM Security, Prompt Injection, Attack Method

A Comprehensive Study of Jailbreak Attack versus Defense for Large Language Models
Authors: Zihao Xu, Yi Liu, Gelei Deng, Yuekang Li, Stjepan Picek | Published: 2024-02-21 | Updated: 2024-05-17 | Tags: LLM Security, Prompt Injection, Defense Method

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification
Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection

Prompt Stealing Attacks Against Large Language Models
Authors: Zeyang Sha, Yang Zhang | Published: 2024-02-20 | Tags: LLM Security, Prompt Injection, Prompt Engineering

Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
Authors: Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein | Published: 2024-02-19 | Updated: 2024-06-05 | Tags: Prompt Injection, Robustness Evaluation, Adversarial Training

An Empirical Evaluation of LLMs for Solving Offensive Security Challenges
Authors: Minghao Shao, Boyuan Chen, Sofija Jancheska, Brendan Dolan-Gavitt, Siddharth Garg, Ramesh Karri, Muhammad Shafique | Published: 2024-02-19 | Tags: LLM Performance Evaluation, Prompt Injection, Educational CTF

SPML: A DSL for Defending Language Models Against Prompt Attacks
Authors: Reshabh K Sharma, Vinayak Gupta, Dan Grossman | Published: 2024-02-19 | Tags: LLM Security, System Prompt Generation, Prompt Injection

Using Hallucinations to Bypass GPT4’s Filter
Authors: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11 | Tags: LLM Security, Prompt Injection, Inappropriate Content Generation

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns
Authors: Ashfak Md Shibli, Mir Mehedi A. Pritom, Maanak Gupta | Published: 2024-02-15 | Tags: Abuse of AI Chatbots, Cyber Attack, Prompt Injection

PAL: Proxy-Guided Black-Box Attack on Large Language Models
Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15 | Tags: LLM Security, Prompt Injection, Attack Method