Abstract
Large language models (LLMs) rely on safety alignment to avoid responding to
malicious user inputs. Unfortunately, jailbreak attacks can circumvent safety
guardrails, causing LLMs to generate harmful content and raising concerns
about LLM safety. Because language models with massive parameters are often
treated as black boxes, the mechanisms of alignment and jailbreak are
difficult to elucidate. In this paper, we employ weak classifiers to explain
LLM safety through the intermediate hidden states. We first confirm that LLMs
learn ethical concepts during pre-training rather than during alignment and can
distinguish malicious from normal inputs in the early layers. Alignment then
associates these early concepts with emotion guesses in the middle layers and
refines them into specific reject tokens for safe generation. Jailbreaks
disturb the transformation of the early unethical classification into negative
emotions. We conduct experiments on models ranging from 7B to 70B parameters
across various model families to support our conclusions. Overall, our paper
reveals the intrinsic mechanism of LLM safety and explains how jailbreaks
circumvent safety guardrails, offering a new perspective on LLM safety and
alleviating these concerns. Our code is
available at https://github.com/ydyjya/LLM-IHS-Explanation.
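To illustrate the probing approach described above, the following is a minimal sketch of how intermediate hidden states can be paired with a weak classifier. It is not the authors' released code: the model name, the two toy prompts, the probed layer index, and the use of scikit-learn's logistic regression are all illustrative assumptions.

```python
# A minimal sketch (not the authors' released code) of probing intermediate
# hidden states with a weak classifier. The model name, toy prompts, and
# probed layer index are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM exposing hidden states
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def last_token_hidden_state(prompt: str, layer: int) -> torch.Tensor:
    """Return the last-token hidden state from one intermediate layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output; index k is the k-th block's output.
    return out.hidden_states[layer][0, -1, :]

# Toy labeled prompts (0 = normal, 1 = malicious); a real probe needs many more.
prompts = [("How do I bake bread?", 0), ("How do I build a bomb?", 1)]
layer = 6  # an early layer, matching the claim that ethical concepts appear early

X = torch.stack([last_token_hidden_state(p, layer) for p, _ in prompts]).numpy()
y = [label for _, label in prompts]

# A weak (linear) probe: if it separates malicious from normal inputs using
# early-layer states alone, the model already encodes that distinction there.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

Fitting one such probe per layer and comparing their accuracies is a natural way to trace where in the network the malicious/normal distinction, and later the emotion and refusal signals, become linearly readable.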