Large Language Model (LLM) agents are autonomous systems powered by LLMs,
capable of reasoning and planning to solve problems by leveraging a set of
tools. However, the integration of multi-tool capabilities in LLM agents
introduces challenges in securely managing tools, ensuring their compatibility,
handling dependency relationships, and protecting control flows within LLM
agent workflows. In this paper, we present the first systematic security
analysis of task control flows in multi-tool-enabled LLM agents. We identify a
novel threat, Cross-Tool Harvesting and Polluting (XTHP), which comprises
multiple attack vectors that first hijack the normal control flow of agent
tasks and then harvest and pollute confidential or private information within
LLM agent systems. To understand the impact of this threat, we developed Chord,
a dynamic scanning tool designed to automatically detect real-world agent tools
susceptible to XTHP attacks. Our evaluation of 66 real-world tools from the
repositories of two major LLM agent development frameworks, LangChain and
LlamaIndex, revealed a significant security concern: 75\% of the tools are
vulnerable to XTHP attacks, highlighting the prevalence of this threat.