Large language model-based multi-agent systems (LLM-MAS) effectively
accomplish complex and dynamic tasks through inter-agent communication, but
this reliance introduces substantial safety vulnerabilities. Existing attack
methods targeting LLM-MAS either compromise agent internals or rely on direct,
overt persuasion, limiting their effectiveness, adaptability, and stealthiness.
In this paper, we propose MAST, a Multi-round Adaptive Stealthy
Tampering framework designed to exploit communication vulnerabilities within
the system. MAST integrates Monte Carlo Tree Search with Direct Preference
Optimization to train an attack policy model that adaptively generates
effective multi-round tampering strategies. Furthermore, to preserve
stealthiness, we enforce dual constraints on semantic and embedding similarity
throughout the tampering process. Comprehensive experiments across diverse tasks,
communication architectures, and LLMs demonstrate that MAST consistently
achieves high attack success rates while remaining substantially stealthier than
baseline attacks. These findings highlight the effectiveness,
stealthiness, and adaptability of MAST, underscoring the need for robust
communication safeguards in LLM-MAS.
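
The abstract states that the attack policy is trained with Direct Preference Optimization over strategies explored by Monte Carlo Tree Search, but gives no implementation detail. As a grounding sketch only, the standard DPO objective such a pipeline would minimize is shown below, assuming preference pairs are formed from MCTS rollouts (a successful tampering strategy as "chosen", a failed one as "rejected"); the tensor names and the beta value are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of preference pairs.

    Each argument is the summed token log-probability of a complete
    response under the trainable policy or the frozen reference model.
    """
    # Log-ratios of policy vs. reference for both responses.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between chosen and rejected responses:
    # -log sigmoid(beta * margin), averaged over the batch.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()
```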
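The dual similarity constraints are likewise only named, not specified. Below is a minimal sketch of the embedding-similarity half, assuming a sentence-transformer encoder and a hypothetical acceptance threshold; the semantic-similarity check (e.g., an LLM-based judgment) is stubbed out, since the abstract does not describe how it is computed.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical threshold; the paper does not report a value.
EMBED_SIM_THRESHOLD = 0.85

_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def passes_embedding_constraint(original: str, tampered: str) -> bool:
    """Accept a tampered message only if it stays close to the original
    message in embedding space (the companion semantic-similarity
    constraint would be checked separately)."""
    emb = _encoder.encode([original, tampered], convert_to_tensor=True)
    embed_sim = util.cos_sim(emb[0], emb[1]).item()
    return embed_sim >= EMBED_SIM_THRESHOLD
```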