These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
Retrieval Augmented Generation (RAG) expands the capabilities of modern large
language models (LLMs), by anchoring, adapting, and personalizing their
responses to the most relevant knowledge sources. It is particularly useful in
chatbot applications, allowing developers to customize LLM output without
expensive retraining. Despite their significant utility in various
applications, RAG systems present new security risks. In this work, we propose
a novel attack that allows an adversary to inject a single malicious document
into a RAG system's knowledge base, and mount a backdoor poisoning attack. We
design Phantom, a general two-stage optimization framework against RAG systems,
that crafts a malicious poisoned document leading to an integrity violation in
the model's output. First, the document is constructed to be retrieved only
when a specific naturally occurring trigger sequence of tokens appears in the
victim's queries. Second, the document is further optimized with crafted
adversarial text that induces various adversarial objectives on the LLM output,
including refusal to answer, reputation damage, privacy violations, and harmful
behaviors.We demonstrate our attacks on multiple open-source LLM architectures,
including Gemma, Vicuna, and Llama, and show that they transfer to
closed-source models such as GPT-3.5 Turbo and GPT-4. Finally, we successfully
demonstrate our attack on an end-to-end black-box production RAG system:
NVIDIA's "Chat with RTX''.