Abstract
Large Language Models (LLMs) are widely applied in decision making, but their
deployment is threatened by jailbreak attacks, where adversarial users
manipulate model behavior to bypass safety measures. Existing defense
mechanisms, such as safety fine-tuning and model editing, either require
extensive parameter modifications or lack precision, leading to performance
degradation on general tasks, making them unsuitable for post-deployment safety
alignment. To address these challenges, we propose DELMAN (Dynamic Editing for
LLMs JAilbreak DefeNse), a novel approach leveraging direct model editing for
precise, dynamic protection against jailbreak attacks. DELMAN directly updates
a minimal set of relevant parameters to neutralize harmful behaviors while
preserving the model's utility. To avoid triggering safe responses in benign
contexts, we incorporate a KL-divergence regularization term to ensure the updated
model remains consistent with the original model when processing benign
queries. Experimental results demonstrate that DELMAN outperforms baseline
methods in mitigating jailbreak attacks while preserving the model's utility,
and adapts seamlessly to new attack instances, providing a practical and
efficient solution for post-deployment model protection.
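The abstract's core idea, constraining a targeted parameter edit with a KL-divergence penalty on benign queries, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name `delman_style_objective`, the scalar `edit_loss`, and the weight `lam` are assumptions introduced here for illustration.

```python
import numpy as np

def softmax(logits):
    """Convert logits to a probability distribution over the vocabulary."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), summed over the vocabulary axis."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def delman_style_objective(edit_loss, benign_logits_orig,
                           benign_logits_updated, lam=1.0):
    """Hypothetical combined objective: minimize the edit loss on harmful
    prompts while a KL term keeps the updated model's next-token
    distributions on benign queries close to the original model's."""
    p = softmax(benign_logits_orig)     # original model, benign inputs
    q = softmax(benign_logits_updated)  # edited model, same inputs
    kl = kl_divergence(p, q).mean()
    return edit_loss + lam * kl
```

When the edited model's benign-query distributions match the original's, the KL term vanishes and only the edit loss remains; any drift on benign behavior is penalized in proportion to `lam`.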