Abstract
Over the past two years, the use of large language models (LLMs) has advanced
rapidly. While these LLMs offer considerable convenience, they also raise
security concerns: LLMs are vulnerable to adversarial attacks mounted through
well-designed textual perturbations. In this paper, we introduce a novel
defense technique named Large LAnguage MOdel Sentinel (LLAMOS), which is
designed to enhance the adversarial robustness of LLMs by purifying the
adversarial textual examples before feeding them into the target LLM. Our
method comprises two main components: a) Agent instruction, which instantiates
a new agent for adversarial defense that alters only a minimal number of
characters, preserving the original meaning of the sentence while defending
against attacks; and b) Defense guidance, which provides strategies for
modifying clean or adversarial examples so that the target LLMs remain well
defended and produce accurate outputs.
Remarkably, the defense agent demonstrates robust defensive capabilities even
without learning from adversarial examples. Additionally, we conduct an
intriguing adversarial experiment in which we develop two agents, one for
defense and one for attack, and engage them in mutual confrontation. During
these adversarial interactions, neither agent completely defeats the other. Extensive
experiments on both open-source and closed-source LLMs demonstrate that our
method effectively defends against adversarial attacks, thereby enhancing
adversarial robustness.
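The abstract describes a two-stage pipeline: a defense agent purifies a possibly adversarial input before it is passed to the target LLM. The sketch below illustrates that flow under stated assumptions; the `DEFENSE_GUIDANCE` prompt text and all function names are hypothetical illustrations, not the paper's actual prompts or implementation, and the agent and target model are abstracted as generic callables rather than any specific API.

```python
from typing import Callable

# Hypothetical guidance prompt, paraphrasing the abstract's description of the
# defense agent's goal; the paper's actual defense-guidance text is not shown here.
DEFENSE_GUIDANCE = (
    "You are a defense agent. If the input contains adversarial character-level "
    "perturbations, repair them by changing as few characters as possible while "
    "preserving the sentence's original meaning. If the input is clean, return "
    "it unchanged."
)


def llamos_purify(user_input: str,
                  defense_agent: Callable[[str, str], str]) -> str:
    """Purify a possibly adversarial input before it reaches the target LLM."""
    # The defense agent receives the guidance prompt plus the raw input and
    # returns a purified version of the text.
    return defense_agent(DEFENSE_GUIDANCE, user_input)


def answer(user_input: str,
           defense_agent: Callable[[str, str], str],
           target_llm: Callable[[str], str]) -> str:
    """Two-stage pipeline: defense agent first, target LLM second."""
    purified = llamos_purify(user_input, defense_agent)
    return target_llm(purified)
```

In this sketch, `defense_agent` and `target_llm` would each be backed by an LLM call (open-source or closed-source); keeping them as plain callables makes the purification step independent of any particular model provider.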