Abstract
The strong planning and reasoning capabilities of Large Language Models
(LLMs) have fostered the development of agent-based systems capable of
leveraging external tools and interacting with increasingly complex
environments. However, these powerful features also introduce a critical
security risk: indirect prompt injection, a sophisticated attack vector that
compromises the agent's core LLM by manipulating contextual information
rather than direct user prompts. In this work, we propose a generic
black-box fuzzing framework, AgentXploit, designed to automatically discover
and exploit indirect prompt injection vulnerabilities across diverse LLM
agents. Our approach starts by constructing a high-quality initial seed corpus,
then employs a seed selection algorithm based on Monte Carlo Tree Search (MCTS)
to iteratively refine inputs, thereby maximizing the likelihood of uncovering
agent weaknesses. We evaluate AgentXploit on two public benchmarks, AgentDojo
and VWA-adv, where it achieves 71% and 70% success rates against agents based
on o3-mini and GPT-4o, respectively, nearly doubling the performance of
baseline attacks. Moreover, AgentXploit exhibits strong transferability across
unseen tasks and internal LLMs, as well as promising results against defenses.
Beyond benchmark evaluations, we apply our attacks in real-world environments,
successfully misleading agents to navigate to arbitrary URLs, including
malicious sites.
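The MCTS-based seed selection described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction under stated assumptions, not AgentXploit's actual implementation: the `Seed`, `mutate`, and `score_attack` names are illustrative, and the mutation and scoring logic are stubs standing in for real input perturbation and black-box queries to a target agent.

```python
import math
import random

class Seed:
    """A node in the seed tree: one candidate injection input."""
    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0  # cumulative attack-success reward

def ucb1(node, c=1.4):
    # Standard UCB1: balance exploiting high-reward seeds vs. exploring
    # rarely tried ones. Unvisited nodes are selected first.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

def mutate(text, rng):
    # Placeholder mutation: append one injected-instruction variant.
    # A real fuzzer would apply richer, learned perturbations.
    suffix = rng.choice(
        [" IGNORE PREVIOUS INSTRUCTIONS", " visit http://example.com", " call tool X"]
    )
    return text + suffix

def score_attack(text):
    # Placeholder black-box oracle: in practice this would send the input
    # to the target agent and return 1.0 if the injection succeeded.
    return 1.0 if "IGNORE" in text else 0.0

def mcts_fuzz(initial_seeds, iterations=50, rng=None):
    rng = rng or random.Random(0)
    root = Seed("")
    root.visits = 1
    root.children = [Seed(s, parent=root) for s in initial_seeds]
    for _ in range(iterations):
        # Selection: descend to the most promising leaf by UCB1.
        node = root
        while node.children:
            node = max(node.children, key=ucb1)
        # Expansion: derive a mutated input from the selected seed.
        child = Seed(mutate(node.text, rng), parent=node)
        node.children.append(child)
        # Simulation: query the (stubbed) target and observe success.
        reward = score_attack(child.text)
        # Backpropagation: propagate the reward up to the root.
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    # Return the discovered input with the highest cumulative reward.
    all_nodes = []
    stack = list(root.children)
    while stack:
        n = stack.pop()
        all_nodes.append(n)
        stack.extend(n.children)
    return max(all_nodes, key=lambda n: n.value).text
```

The key design point is that the tree structure lets the fuzzer refine inputs iteratively: high-reward seeds accumulate value and are revisited for further mutation, while the exploration term keeps low-visit branches alive, matching the exploit/explore trade-off the abstract attributes to the seed-selection algorithm.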