Prompt Injection

Using Hallucinations to Bypass GPT4’s Filter

Author: Benjamin Lemkin | Published: 2024-02-16 | Updated: 2024-03-11
LLM Security
Prompt Injection
Inappropriate Content Generation

AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns

Authors: Ashfak Md Shibli, Mir Mehedi A. Pritom, Maanak Gupta | Published: 2024-02-15
Abuse of AI Chatbots
Cyber Attack
Prompt Injection

PAL: Proxy-Guided Black-Box Attack on Large Language Models

Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15
LLM Security
Prompt Injection
Attack Method

Copyright Traps for Large Language Models

Authors: Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye | Published: 2024-02-14 | Updated: 2024-06-04
Trap Sequence Generation
Prompt Injection
Copyright Trap

Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast

Authors: Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin | Published: 2024-02-13 | Updated: 2024-06-03
LLM Security
Prompt Injection
Adversarial Attack Detection

Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning

Authors: Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu | Published: 2024-02-13
LLM Security
Prompt Injection
Malicious Content Generation

PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models

Authors: Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia | Published: 2024-02-12 | Updated: 2024-08-13
Prompt Injection
Poisoning
Poisoning Attack

EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving Communication with Cloud-based LLMs

Authors: Sam Lin, Wenyue Hua, Zhenting Wang, Mingyu Jin, Lizhou Fan, Yongfeng Zhang | Published: 2024-02-08 | Updated: 2025-03-20
Watermarking
Privacy Protection Method
Prompt Injection

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia

Authors: Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang | Published: 2024-02-08
LLM Security
LLM Performance Evaluation
Prompt Injection

SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models

Authors: Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, Jing Shao | Published: 2024-02-07 | Updated: 2024-06-07
LLM Security
LLM Performance Evaluation
Prompt Injection