These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
Large language models (LLMs) integrated with retrieval-augmented generation
(RAG) systems improve accuracy by leveraging external knowledge sources.
However, recent research has revealed RAG's susceptibility to poisoning
attacks, where the attacker injects poisoned texts into the knowledge database,
leading to attacker-desired responses. Existing defenses, which predominantly
focus on inference-time mitigation, have proven insufficient against
sophisticated attacks. In this paper, we introduce RAGForensics, the first
traceback system for RAG, designed to identify poisoned texts within the
knowledge database that are responsible for the attacks. RAGForensics operates
iteratively, first retrieving a subset of texts from the database and then
utilizing a specially crafted prompt to guide an LLM in detecting potential
poisoning texts. Empirical evaluations across multiple datasets demonstrate the
effectiveness of RAGForensics against state-of-the-art poisoning attacks. This
work pioneers the traceback of poisoned texts in RAG systems, providing a
practical and promising defense mechanism to enhance their security.