Abstract
Ensuring safety alignment has become a critical requirement for large
language models (LLMs), particularly given their widespread deployment in
real-world applications. However, LLMs remain susceptible to jailbreak attacks,
which exploit system vulnerabilities to bypass safety measures and generate
harmful outputs. Although numerous defense mechanisms based on adversarial
training have been proposed, a persistent challenge lies in the exacerbation of
over-refusal behaviors, which compromise the overall utility of the model. To
address these challenges, we propose a Latent-space Adversarial Training with
Post-aware Calibration (LATPC) framework. During the adversarial training
phase, LATPC compares harmful and harmless instructions in the latent space and
extracts safety-critical dimensions to construct refusal feature attacks, which
precisely simulate, in an attack-type-agnostic manner, the jailbreak attacks that require adversarial
mitigation. At the inference stage, an embedding-level calibration mechanism is
employed to alleviate over-refusal behaviors with minimal computational
overhead. Experimental results demonstrate that, compared to various defense
methods across five types of jailbreak attacks, the LATPC framework achieves a
superior balance between safety and utility. Moreover, our analysis underscores
the effectiveness of extracting safety-critical dimensions from the latent
space for constructing robust refusal feature attacks.
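To make the mechanism described above concrete, the following is a minimal sketch of how such a refusal feature attack could be constructed: it compares the mean hidden states of harmful and harmless instructions in the latent space, keeps only the top-k safety-critical dimensions of their difference, and projects hidden states off the resulting direction to simulate a jailbreak. The function names, the top-k selection rule, and the projection step are illustrative assumptions, not LATPC's exact procedure.

```python
# Hypothetical sketch of the refusal-feature extraction step sketched in the
# abstract. All names and hyperparameters here are assumptions for
# illustration; the paper's actual method may differ.
import torch

def refusal_feature_attack(
    harmful_hidden: torch.Tensor,   # (n_harmful, d) latent states of harmful instructions
    harmless_hidden: torch.Tensor,  # (n_harmless, d) latent states of harmless instructions
    top_k: int = 64,                # assumed number of safety-critical dimensions to keep
) -> torch.Tensor:
    """Return a sparse, unit-norm refusal direction with top-k dims nonzero."""
    # Difference of class means approximates a "refusal" direction.
    diff = harmful_hidden.mean(dim=0) - harmless_hidden.mean(dim=0)
    # Keep the k dimensions with the largest magnitude (safety-critical dims).
    idx = diff.abs().topk(top_k).indices
    r = torch.zeros_like(diff)
    r[idx] = diff[idx]
    return r / r.norm()

def apply_attack(hidden: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Simulate a jailbreak by projecting hidden states off the refusal direction."""
    return hidden - (hidden @ r).unsqueeze(-1) * r

# Toy usage with synthetic latent states (d = 4096, typical of 7B models):
if __name__ == "__main__":
    torch.manual_seed(0)
    harmful = torch.randn(32, 4096) + 0.5
    harmless = torch.randn(32, 4096)
    r = refusal_feature_attack(harmful, harmless)
    attacked = apply_attack(harmful, r)
    print(attacked.shape)  # (32, 4096), refusal component removed
```

Under these assumptions, adversarially training against states perturbed by `apply_attack` would expose the model to a broad class of jailbreak-like perturbations without enumerating specific attack types.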