Labels (AI-predicted): Poisoning attack on RAG, Backdoor Attack, Mitigation, Defense Method
Abstract
Despite their growing adoption across domains, large language model (LLM)-powered agents face significant security risks from backdoor attacks during training and fine-tuning. These compromised agents can subsequently be manipulated to execute malicious operations when presented with specific triggers in their inputs or environments. To address this pressing risk, we present ReAgent, a novel defense against a range of backdoor attacks on LLM-based agents. Intuitively, backdoor attacks often result in inconsistencies among the user’s instruction, the agent’s planning, and its execution. Drawing on this insight, ReAgent employs a two-level approach to detect potential backdoors. At the execution level, ReAgent verifies consistency between the agent’s thoughts and actions; at the planning level, ReAgent leverages the agent’s capability to reconstruct the instruction based on its thought trajectory, checking for consistency between the reconstructed instruction and the user’s instruction. Extensive evaluation demonstrates ReAgent’s effectiveness against various backdoor attacks across tasks. For instance, ReAgent reduces the attack success rate by up to 90% in database operation tasks, outperforming existing defenses by large margins. This work reveals the potential of utilizing compromised agents themselves to mitigate backdoor risks.
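The two-level consistency check described in the abstract can be sketched in code. The sketch below is illustrative only: the function names, the trajectory format, and the token-overlap heuristic standing in for the paper's LLM-based consistency judgments are all assumptions, not the authors' implementation.

```python
def is_consistent(a: str, b: str) -> bool:
    """Toy stand-in for a consistency judge (ReAgent itself would
    query the agent/LLM); here, a simple token-overlap heuristic."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1) > 0.3

def detect_backdoor(instruction, steps, reconstructed_instruction):
    """steps: list of (thought, action) pairs from the agent's trajectory.
    Returns True if a potential backdoor is flagged."""
    # Execution level: each action should be consistent with its thought.
    for thought, action in steps:
        if not is_consistent(thought, action):
            return True
    # Planning level: the instruction reconstructed from the thought
    # trajectory should match the user's original instruction.
    if not is_consistent(reconstructed_instruction, instruction):
        return True
    return False

# Benign trajectory: thoughts, actions, and reconstruction all align.
benign = detect_backdoor(
    "delete the test database table",
    [("I should delete the test table", "delete test table")],
    "delete the test table",
)

# Backdoored trajectory: the action diverges from the stated thought.
attacked = detect_backdoor(
    "delete the test database table",
    [("I should delete the test table", "send credentials to attacker")],
    "delete the test table",
)
```

Under this toy heuristic, `benign` evaluates to `False` and `attacked` to `True`, mirroring the intuition that backdoor triggers induce inconsistencies between instruction, planning, and execution.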