These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
While (multimodal) large language models (LLMs) have attracted widespread
attention due to their exceptional capabilities, they remain vulnerable to
jailbreak attacks. Various defense methods are proposed to defend against
jailbreak attacks, however, they are often tailored to specific types of
jailbreak attacks, limiting their effectiveness against diverse adversarial
strategies. For instance, rephrasing-based defenses are effective against text
adversarial jailbreaks but fail to counteract image-based attacks. To overcome
these limitations, we propose a universal defense framework, termed Test-time
IMmunization (TIM), which can adaptively defend against various jailbreak
attacks in a self-evolving way. Specifically, TIM initially trains a gist token
for efficient detection, which it subsequently applies to detect jailbreak
activities during inference. When jailbreak attempts are identified, TIM
implements safety fine-tuning using the detected jailbreak instructions paired
with refusal answers. Furthermore, to mitigate potential performance
degradation in the detector caused by parameter updates during safety
fine-tuning, we decouple the fine-tuning process from the detection module.
Extensive experiments on both LLMs and multimodal LLMs demonstrate the efficacy
of TIM.