Indirect Prompt Injection

IRCopilot: Automated Incident Response with Large Language Models

Authors: Xihuan Lin, Jie Zhang, Gelei Deng, Tianzhe Liu, Xiaolong Liu, Changcai Yang, Tianwei Zhang, Qing Guo, Riqing Chen | Published: 2025-05-27
LLM Security
Indirect Prompt Injection
Model DoS

Security Concerns for Large Language Models: A Survey

Authors: Miles Q. Li, Benjamin C. M. Fung | Published: 2025-05-24 | Updated: 2025-08-20
Indirect Prompt Injection
Prompt Injection
Psychological Manipulation

CTRAP: Embedding Collapse Trap to Safeguard Large Language Models from Harmful Fine-Tuning

Authors: Biao Yi, Tiansheng Huang, Baolei Zhang, Tong Li, Lihai Nie, Zheli Liu, Li Shen | Published: 2025-05-22
Alignment
Indirect Prompt Injection
Calculation of Output Harmfulness

Can Large Language Models Really Recognize Your Name?

Authors: Dzung Pham, Peter Kairouz, Niloofar Mireshghallah, Eugene Bagdasarian, Chau Minh Pham, Amir Houmansadr | Published: 2025-05-20
LLM Security
Indirect Prompt Injection
Privacy Leakage

The Hidden Dangers of Browsing AI Agents

Authors: Mykyta Mudryi, Markiyan Chaklosh, Grzegorz Wójcik | Published: 2025-05-19
LLM Security
Indirect Prompt Injection
Attack Method

From Assistants to Adversaries: Exploring the Security Risks of Mobile LLM Agents

Authors: Liangxuan Wu, Chao Wang, Tianming Liu, Yanjie Zhao, Haoyu Wang | Published: 2025-05-19 | Updated: 2025-05-20
LLM Security
Indirect Prompt Injection
Attack Method

Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models

Authors: Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu | Published: 2025-05-19
LLM Security
Indirect Prompt Injection
Privacy Management

IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems

Authors: Liwen Wang, Wenxuan Wang, Shuai Wang, Zongjie Li, Zhenlan Ji, Zongyi Lyu, Daoyuan Wu, Shing-Chi Cheung | Published: 2025-05-18 | Updated: 2025-05-20
Indirect Prompt Injection
Privacy Leakage
Information Propagation Method

AutoPentest: Enhancing Vulnerability Management With Autonomous LLM Agents

Authors: Julius Henke | Published: 2025-05-15
LLM Security
RAG
Indirect Prompt Injection

LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries

Authors: Zekun Wu, Seonglae Cho, Umar Mohammed, Cristian Munoz, Kleyton Costa, Xin Guan, Theo King, Ze Wang, Emre Kazim, Adriano Koshiyama | Published: 2025-05-13 | Updated: 2025-06-30
Indirect Prompt Injection
Risk Assessment Method
Dependency Management