Amplified Vulnerabilities: Structured Jailbreak Attacks on LLM-based Multi-Agent Debate Authors: Senmao Qi, Yifei Zou, Peng Li, Ziyi Lin, Xiuzhen Cheng, Dongxiao Yu | Published: 2025-04-23 Tags: Indirect Prompt Injection, Multi-Round Dialogue, Large Language Model
Give LLMs a Security Course: Securing Retrieval-Augmented Code Generation via Knowledge Injection Authors: Bo Lin, Shangwen Wang, Yihao Qin, Liqian Chen, Xiaoguang Mao | Published: 2025-04-23 Tags: Poisoning Attack on RAG, Indirect Prompt Injection, Security of Code Generation
Exploring the Role of Large Language Models in Cybersecurity: A Systematic Survey Authors: Shuang Tian, Tao Zhang, Jiqiang Liu, Jiacheng Wang, Xuangou Wu, Xiaoqiang Zhu, Ruichen Zhang, Weiting Zhang, Zhenhui Yuan, Shiwen Mao, Dong In Kim | Published: 2025-04-22 | Updated: 2025-04-28 Tags: Indirect Prompt Injection, Prompt Injection, Large Language Model
Progent: Programmable Privilege Control for LLM Agents Authors: Tianneng Shi, Jingxuan He, Zhun Wang, Linyu Wu, Hongwei Li, Wenbo Guo, Dawn Song | Published: 2025-04-16 Tags: LLM Performance Evaluation, Indirect Prompt Injection, Privacy Protection Mechanism
The Obvious Invisible Threat: LLM-Powered GUI Agents’ Vulnerability to Fine-Print Injections Authors: Chaoran Chen, Zhiping Zhang, Bingcan Guo, Shang Ma, Ibrahim Khalilov, Simret A Gebreegziabher, Yanfang Ye, Ziang Xiao, Yaxing Yao, Tianshi Li, Toby Jia-Jun Li | Published: 2025-04-15 Tags: Indirect Prompt Injection, Privacy Protection Mechanism, User Behavior Analysis
StruPhantom: Evolutionary Injection Attacks on Black-Box Tabular Agents Powered by Large Language Models Authors: Yang Feng, Xudong Pan | Published: 2025-04-14 Tags: LLM Performance Evaluation, Indirect Prompt Injection, Malicious Website Detection
ControlNET: A Firewall for RAG-based LLM System Authors: Hongwei Yao, Haoran Shi, Yidou Chen, Yixin Jiang, Cong Wang, Zhan Qin | Published: 2025-04-13 | Updated: 2025-04-17 Tags: Poisoning Attack on RAG, Indirect Prompt Injection, Data Breach Risk
CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent Authors: Liang-bo Ning, Shijie Wang, Wenqi Fan, Qing Li, Xin Xu, Hao Chen, Feiran Huang | Published: 2025-04-13 | Updated: 2025-04-24 Tags: Indirect Prompt Injection, Prompt Injection, Attacker Behavior Analysis
Separator Injection Attack: Uncovering Dialogue Biases in Large Language Models Caused by Role Separators Authors: Xitao Li, Haijun Wang, Jiang Wu, Ting Liu | Published: 2025-04-08 Tags: Indirect Prompt Injection, Prompting Strategy, Model Performance Evaluation
Prεεmpt: Sanitizing Sensitive Prompts for LLMs Authors: Amrita Roy Chowdhury, David Glukhov, Divyam Anshumaan, Prasad Chalasani, Nicolas Papernot, Somesh Jha, Mihir Bellare | Published: 2025-04-07 Tags: RAG, Indirect Prompt Injection, Privacy Analysis