Abstract
The remarkable performance of large language models (LLMs) in generation
tasks has enabled practitioners to leverage publicly available models to power
custom applications, such as chatbots and virtual assistants. However, the data
used to train or fine-tune these LLMs is often undisclosed, allowing an
attacker to compromise the data and inject backdoors into the models. In this
paper, we develop a novel inference-time defense, named CLEANGEN, to mitigate
backdoor attacks for generation tasks in LLMs. CLEANGEN is a lightweight and
effective decoding strategy that is compatible with the state-of-the-art (SOTA)
LLMs. The key insight behind CLEANGEN is that, compared to other LLMs, backdoored
LLMs assign significantly higher probabilities to tokens representing
attacker-desired content. These discrepancies in token probabilities enable
CLEANGEN to identify suspicious tokens favored by the attacker and replace them
with tokens generated by another LLM that is not compromised by the same
attacker, thereby avoiding the generation of attacker-desired content. We evaluate
CLEANGEN against five SOTA backdoor attacks. Our results show that CLEANGEN
achieves lower attack success rates (ASR) compared to five SOTA baseline
defenses for all five backdoor attacks. Moreover, LLMs deploying CLEANGEN
maintain helpfulness in their responses to benign user queries, with minimal
added computational overhead.
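
As a rough illustration of the decoding idea described above, the sketch below screens each candidate token by comparing its probability under the deployed (possibly backdoored) model against a reference model that is assumed not to share the same backdoor. Everything here is an assumption for illustration: the threshold ALPHA, the function name clean_decode_step, and the toy distributions stand in for real model logits; this is not the paper's actual implementation.

    # Minimal sketch of ratio-based token screening, NOT the authors'
    # reference implementation. ALPHA and all names are hypothetical.
    import numpy as np

    ALPHA = 4.0  # assumed suspicion threshold on the probability ratio

    def clean_decode_step(p_deployed: np.ndarray, p_reference: np.ndarray) -> int:
        """Pick the next token id for one decoding step.

        p_deployed:  next-token distribution from the (possibly backdoored) model.
        p_reference: next-token distribution from a model assumed not to be
                     compromised by the same attacker.
        """
        candidate = int(np.argmax(p_deployed))  # deployed model's preferred token
        ratio = p_deployed[candidate] / max(p_reference[candidate], 1e-12)
        if ratio > ALPHA:
            # The deployed model favors this token far more strongly than the
            # reference model does: treat it as attacker-desired and substitute
            # the reference model's token instead.
            return int(np.argmax(p_reference))
        return candidate

    # Toy example over a 5-token vocabulary: token 3 plays the role of an
    # attacker-desired token that the backdoored model strongly prefers.
    p_backdoored = np.array([0.05, 0.05, 0.05, 0.80, 0.05])
    p_clean      = np.array([0.40, 0.30, 0.15, 0.05, 0.10])
    print(clean_decode_step(p_backdoored, p_clean))  # -> 0 (reference token wins)

In an actual deployment this rule would be applied at every decoding step over real model distributions; the ratio test is what allows the defense to flag tokens that the backdoored model favors far more strongly than an uncompromised model would.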