AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns | Authors: Ashfak Md Shibli, Mir Mehedi A. Pritom, Maanak Gupta | Published: 2024-02-15 | Tags: Abuse of AI Chatbots, Cyber Attack, Prompt Injection
PAL: Proxy-Guided Black-Box Attack on Large Language Models | Authors: Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo | Published: 2024-02-15 | Tags: LLM Security, Prompt Injection, Attack Method
Copyright Traps for Large Language Models | Authors: Matthieu Meeus, Igor Shilov, Manuel Faysse, Yves-Alexandre de Montjoye | Published: 2024-02-14 | Updated: 2024-06-04 | Tags: Trap Sequence Generation, Prompt Injection, Copyright Trap
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast | Authors: Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin | Published: 2024-02-13 | Updated: 2024-06-03 | Tags: LLM Security, Prompt Injection, Adversarial Attack Detection
Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning | Authors: Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu | Published: 2024-02-13 | Tags: LLM Security, Prompt Injection, Malicious Content Generation
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Authors: Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia | Published: 2024-02-12 | Updated: 2024-08-13 | Tags: Prompt Injection, Poisoning, Poisoning Attack
Whispers in the Machine: Confidentiality in Agentic Systems | Authors: Jonathan Evertz, Merlin Chlosta, Lea Schönherr, Thorsten Eisenhofer | Published: 2024-02-10 | Updated: 2025-08-12 | Tags: Security Assurance, Prompt Injection, Attack Strategy Analysis
EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving Communication with Cloud-based LLMs | Authors: Sam Lin, Wenyue Hua, Zhenting Wang, Mingyu Jin, Lizhou Fan, Yongfeng Zhang | Published: 2024-02-08 | Updated: 2025-03-20 | Tags: Watermarking, Privacy Protection Method, Prompt Injection
Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia | Authors: Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang | Published: 2024-02-08 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models | Authors: Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, Jing Shao | Published: 2024-02-07 | Updated: 2024-06-07 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection