Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning | Authors: Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu | Published: 2024-02-13 | Tags: LLM Security, Prompt Injection, Malicious Content Generation
PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models | Authors: Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia | Published: 2024-02-12 | Updated: 2024-08-13 | Tags: Prompt Injection, Poisoning, Poisoning Attack
EmojiPrompt: Generative Prompt Obfuscation for Privacy-Preserving Communication with Cloud-based LLMs | Authors: Sam Lin, Wenyue Hua, Zhenting Wang, Mingyu Jin, Lizhou Fan, Yongfeng Zhang | Published: 2024-02-08 | Updated: 2025-03-20 | Tags: Watermarking, Privacy Protection Method, Prompt Injection
Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia | Authors: Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang | Published: 2024-02-08 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models | Authors: Lijun Li, Bowen Dong, Ruohui Wang, Xuhao Hu, Wangmeng Zuo, Dahua Lin, Yu Qiao, Jing Shao | Published: 2024-02-07 | Updated: 2024-06-07 | Tags: LLM Security, LLM Performance Evaluation, Prompt Injection
LLM Agents can Autonomously Hack Websites | Authors: Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, Daniel Kang | Published: 2024-02-06 | Updated: 2024-02-16 | Tags: Website Vulnerability, Cyber Attack, Prompt Injection
Detecting Scams Using Large Language Models | Authors: Liming Jiang | Published: 2024-02-05 | Tags: LLM Security, Phishing Detection, Prompt Injection
Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models | Authors: Junjie Chu, Zeyang Sha, Michael Backes, Yang Zhang | Published: 2024-02-05 | Updated: 2024-10-07 | Tags: Privacy Protection, Prompt Injection, Malicious Prompt
Adversarial Text Purification: A Large Language Model Approach for Defense | Authors: Raha Moraffah, Shubh Khandelwal, Amrita Bhattacharjee, Huan Liu | Published: 2024-02-05 | Tags: Text Generation Method, Prompt Injection, Adversarial Text Purification
Jailbreaking Attack against Multimodal Large Language Model | Authors: Zhenxing Niu, Haodong Ren, Xinbo Gao, Gang Hua, Rong Jin | Published: 2024-02-04 | Tags: Prompt Injection, Malicious Content Generation, Information Gathering Methods