These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
With the advancement of technology, large language models (LLMs) have
achieved remarkable performance across various natural language processing
(NLP) tasks, powering LLM-integrated applications like Microsoft Copilot.
However, as LLMs continue to evolve, new vulnerabilities, especially prompt
injection attacks arise. These attacks trick LLMs into deviating from the
original input instructions and executing the attacker's instructions injected
in data content, such as retrieved results. Recent attack methods leverage
LLMs' instruction-following abilities and their inabilities to distinguish
instructions injected in the data content, and achieve a high attack success
rate (ASR). When comparing the attack and defense methods, we interestingly
find that they share similar design goals, of inducing the model to ignore
unwanted instructions and instead to execute wanted instructions. Therefore, we
raise an intuitive question: Could these attack techniques be utilized for
defensive purposes? In this paper, we invert the intention of prompt injection
methods to develop novel defense methods based on previous training-free attack
methods, by repeating the attack process but with the original input
instruction rather than the injected instruction. Our comprehensive experiments
demonstrate that our defense techniques outperform existing training-free
defense approaches, achieving state-of-the-art results.