Abstract
Backdoor attacks significantly compromise the security of large language
models by triggering them to output specific and controlled content. Currently,
triggers for textual backdoor attacks fall into two categories: fixed-token
triggers and sentence-pattern triggers. However, the former are typically easy
to identify and filter, while the latter, such as syntactic or stylistic
patterns, do not apply to all original samples and may cause semantic shifts.
In this paper,
inspired by cross-lingual (CL) prompts of LLMs in real-world scenarios, we
propose a higher-dimensional trigger method at the paragraph level, namely
CL-attack. CL-attack injects the backdoor by using texts with specific
structures that incorporate multiple languages, thereby offering greater
stealthiness and universality compared to existing backdoor attack techniques.
Extensive experiments on different tasks and model architectures demonstrate
that CL-attack can achieve nearly 100% attack success rate with a low poisoning
rate in both classification and generation tasks. We also empirically show that
CL-attack is more robust against current major defense methods than baseline
backdoor attacks. Additionally, we develop a new defense called
TranslateDefense, which can partially mitigate the impact of CL-attack.
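To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how a paragraph-level cross-lingual trigger might be constructed: each sentence of a poisoned sample is rewritten according to a fixed language sequence, so the multilingual *structure* of the paragraph, rather than any fixed token, serves as the trigger. The language sequence, the `translate` stub, and the toy lookup table are all illustrative assumptions; a real attack would use an actual machine-translation system.

```python
# Illustrative sketch of a cross-lingual, paragraph-level trigger.
# Assumption: the trigger is a fixed per-sentence language pattern.
LANG_SEQUENCE = ["en", "de", "zh"]  # hypothetical fixed language pattern

# Toy lookup table standing in for a real machine-translation system.
TOY_MT = {
    ("i liked the plot", "de"): "mir hat die Handlung gefallen",
    ("the acting was fine", "zh"): "演技还不错",
}

def translate(sentence: str, lang: str) -> str:
    """Stub translator: English passes through; others use the toy table."""
    if lang == "en":
        return sentence
    return TOY_MT.get((sentence, lang), sentence)

def apply_cl_trigger(sentences: list[str]) -> list[str]:
    """Rewrite sentences into the fixed cross-lingual structure."""
    return [
        translate(s, LANG_SEQUENCE[i % len(LANG_SEQUENCE)])
        for i, s in enumerate(sentences)
    ]

paragraph = [
    "the movie was great",
    "i liked the plot",
    "the acting was fine",
]
poisoned = apply_cl_trigger(paragraph)
```

Because the trigger is the multilingual structure of an otherwise natural paragraph, token-level filtering defenses have no single suspicious token to flag, which is the stealthiness property the abstract highlights.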