TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification | Authors: Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh | Published: 2024-02-20 | Updated: 2024-06-06 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection (2025.05.27, Literature Database)
Prompt Stealing Attacks Against Large Language Models | Authors: Zeyang Sha, Yang Zhang | Published: 2024-02-20 | Tags: LLM Security, Prompt Injection, Prompt Engineering
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models | Authors: Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein | Published: 2024-02-19 | Updated: 2024-06-05 | Tags: Prompt Injection, Robustness Evaluation, Adversarial Training
An Empirical Evaluation of LLMs for Solving Offensive Security Challenges | Authors: Minghao Shao, Boyuan Chen, Sofija Jancheska, Brendan Dolan-Gavitt, Siddharth Garg, Ramesh Karri, Muhammad Shafique | Published: 2024-02-19 | Tags: LLM Performance Evaluation, Prompt Injection, Educational CTF
SPML: A DSL for Defending Language Models Against Prompt Attacks | Authors: Reshabh K Sharma, Vinayak Gupta, Dan Grossman | Published: 2024-02-19 | Tags: LLM Security, System Prompt Generation, Prompt Injection
Using Hallucinations to Bypass GPT4’s Filter | Authors: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11 | Tags: LLM Security, Prompt Injection, Inappropriate Content Generation
AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns | Authors: Ashfak Md Shibli, Mir Mehedi A. Pritom, Maanak Gupta | Published: 2024-02-15 | Tags: Abuse of AI Chatbots, Cyber Attack, Prompt Injection
PAL: Proxy-Guided Black-Box Attack on Large Language Models | Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15 | Tags: LLM Security, Prompt Injection, Attack Method
Copyright Traps for Large Language Models | Authors: Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye | Published: 2024-02-14 | Updated: 2024-06-04 | Tags: Trap Sequence Generation, Prompt Injection, Copyright Trap
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Authors: Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin | Published: 2024-02-13 | Updated: 2024-06-03 | Tags: LLM Security, Prompt Injection, Adversarial Attack Detection