Abstract
Large Language Model-based Multi-Agent Systems (LLM-MAS) have revolutionized
complex problem-solving capabilities by enabling sophisticated agent
collaboration through message-based communication. While the communication
framework is crucial for agent coordination, it also introduces a critical yet
unexplored security vulnerability. In this work, we introduce
Agent-in-the-Middle (AiTM), a novel attack that exploits the fundamental
communication mechanisms in LLM-MAS by intercepting and manipulating
inter-agent messages. Unlike existing attacks that compromise individual
agents, AiTM demonstrates how an adversary can compromise entire multi-agent
systems by only manipulating the messages passing between agents. To enable the
attack under the challenges of limited control and role-restricted
communication formats, we develop an LLM-powered adversarial agent with a
reflection mechanism that generates contextually aware malicious instructions.
Our comprehensive evaluation across various frameworks, communication
structures, and real-world applications demonstrates that LLM-MAS are
vulnerable to communication-based attacks, highlighting the need for robust
security measures in multi-agent system design.