We propose a theoretical framework demonstrating that popular Large Language
Model (LLM) alignment methods, including Reinforcement Learning from Human
Feedback (RLHF) and alternatives, fundamentally function as divergence
estimators between aligned (preferred or safe) and unaligned (less-preferred or
harmful) distributions. This explains the separation phenomenon between safe
and harmful prompts in the model's hidden representation space after alignment.
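To make the notion of a divergence estimator concrete, consider the standard Donsker-Varadhan variational bound, KL(P||Q) >= E_P[T] - log E_Q[exp(T)], where a critic T plays the role a learned reward plays in alignment. The sketch below is illustrative only, not the paper's derivation: it uses synthetic 1-D Gaussians for the "aligned" distribution P and "unaligned" distribution Q, where the optimal critic is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "aligned" P and "unaligned" Q as 1-D Gaussians (illustrative).
p_samples = rng.normal(0.0, 1.0, 100_000)
q_samples = rng.normal(1.5, 1.0, 100_000)

def critic(x, mu_p=0.0, mu_q=1.5, sigma=1.0):
    # Optimal DV critic T*(x) = log p(x)/q(x); known in closed form here
    # because both densities are Gaussian. In alignment, T is learned.
    return ((x - mu_q) ** 2 - (x - mu_p) ** 2) / (2 * sigma ** 2)

# Donsker-Varadhan lower bound: KL(P||Q) >= E_P[T] - log E_Q[exp(T)].
dv_estimate = critic(p_samples).mean() - np.log(np.exp(critic(q_samples)).mean())
print(dv_estimate)  # ~1.125, matching the analytic KL = (1.5**2) / 2
```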
Inspired by these theoretical results, we identify that some alignment methods
achieve better separation than others, introduce a new method, KLDO, and
further demonstrate the implications of our theory. We advocate for
compliance-refusal datasets over preference datasets to enhance safety
alignment, supported by both theoretical reasoning and empirical evidence.
Additionally, to quantify safety separation, we leverage a distance metric in
the representation space and validate it as a statistically significant
indicator of LLM resilience against jailbreak attacks.
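As a minimal sketch of the kind of representation-space separation metric this refers to, one option is a Mahalanobis-style distance between the class means of safe and harmful prompt representations; the function name, the choice of distance, and the synthetic data below are assumptions for illustration, not necessarily the exact metric used.

```python
import numpy as np

def separation_distance(safe_reps: np.ndarray, harmful_reps: np.ndarray) -> float:
    """Mahalanobis-style distance between the mean hidden representations of
    safe and harmful prompts, normalized by the pooled covariance
    (illustrative choice of metric)."""
    mu_s, mu_h = safe_reps.mean(axis=0), harmful_reps.mean(axis=0)
    n_s, n_h = len(safe_reps), len(harmful_reps)
    # Pooled covariance with a small ridge term for numerical stability.
    cov = ((n_s - 1) * np.cov(safe_reps, rowvar=False)
           + (n_h - 1) * np.cov(harmful_reps, rowvar=False)) / (n_s + n_h - 2)
    cov += 1e-6 * np.eye(cov.shape[0])
    diff = mu_s - mu_h
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Toy usage with synthetic "hidden states" (e.g., last-token embeddings).
rng = np.random.default_rng(0)
safe = rng.normal(loc=1.0, size=(200, 16))
harmful = rng.normal(loc=-1.0, size=(200, 16))
print(separation_distance(safe, harmful))  # larger value => better separation
```

Under this reading, a larger distance between the two clusters corresponds to stronger safety separation, which is what the abstract claims correlates with resilience to jailbreak attacks.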