ATAG: AI-Agent Application Threat Assessment with Attack Graphs | Authors: Parth Atulbhai Gandhi, Akansha Shukla, David Tayouri, Beni Ifland, Yuval Elovici, Rami Puzis, Asaf Shabtai | Published: 2025-06-03 | Tags: Indirect Prompt Injection, Graph Construction, Risk Assessment
Attention Knows Whom to Trust: Attention-based Trust Management for LLM Multi-Agent Systems | Authors: Pengfei He, Zhenwei Dai, Xianfeng Tang, Yue Xing, Hui Liu, Jingying Zeng, Qiankun Peng, Shrivats Agrawal, Samarth Varshney, Suhang Wang, Jiliang Tang, Qi He | Published: 2025-06-03 | Tags: Indirect Prompt Injection, Model DoS, Ethical Considerations
Beyond the Protocol: Unveiling Attack Vectors in the Model Context Protocol (MCP) Ecosystem | Authors: Hao Song, Yiming Shen, Wenxuan Luo, Leixin Guo, Ting Chen, Jiashui Wang, Beibei Li, Xiaosong Zhang, Jiachi Chen | Published: 2025-05-31 | Updated: 2025-08-20 | Tags: Indirect Prompt Injection, Prompt Injection, Attack Type
IRCopilot: Automated Incident Response with Large Language Models | Authors: Xihuan Lin, Jie Zhang, Gelei Deng, Tianzhe Liu, Xiaolong Liu, Changcai Yang, Tianwei Zhang, Qing Guo, Riqing Chen | Published: 2025-05-27 | Tags: LLM Security, Indirect Prompt Injection, Model DoS
Security Concerns for Large Language Models: A Survey | Authors: Miles Q. Li, Benjamin C. M. Fung | Published: 2025-05-24 | Updated: 2025-08-20 | Tags: Indirect Prompt Injection, Prompt Injection, Psychological Manipulation
CTRAP: Embedding Collapse Trap to Safeguard Large Language Models from Harmful Fine-Tuning | Authors: Biao Yi, Tiansheng Huang, Baolei Zhang, Tong Li, Lihai Nie, Zheli Liu, Li Shen | Published: 2025-05-22 | Tags: Alignment, Indirect Prompt Injection, Calculation of Output Harmfulness
Can Large Language Models Really Recognize Your Name? | Authors: Dzung Pham, Peter Kairouz, Niloofar Mireshghallah, Eugene Bagdasarian, Chau Minh Pham, Amir Houmansadr | Published: 2025-05-20 | Tags: LLM Security, Indirect Prompt Injection, Privacy Leakage
The Hidden Dangers of Browsing AI Agents | Authors: Mykyta Mudryi, Markiyan Chaklosh, Grzegorz Wójcik | Published: 2025-05-19 | Tags: LLM Security, Indirect Prompt Injection, Attack Method
From Assistants to Adversaries: Exploring the Security Risks of Mobile LLM Agents | Authors: Liangxuan Wu, Chao Wang, Tianming Liu, Yanjie Zhao, Haoyu Wang | Published: 2025-05-19 | Updated: 2025-05-20 | Tags: LLM Security, Indirect Prompt Injection, Attack Method
Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models | Authors: Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu | Published: 2025-05-19 | Tags: LLM Security, Indirect Prompt Injection, Privacy Management