Improving the safety and reliability of large language models (LLMs) is a
crucial aspect of realizing trustworthy AI systems. Although alignment methods
aim to suppress harmful content generation, LLMs often remain vulnerable to
jailbreaking attacks that use adversarial inputs to subvert alignment and
induce harmful outputs. We propose the Randomized Embedding Smoothing and Token
Aggregation (RESTA) defense, which adds random noise to the embedding vectors
and performs aggregation during the generation of each output token, with the
aim of better preserving semantic information. Our experiments demonstrate that
our approach achieves superior robustness-utility tradeoffs compared to
baseline defenses.
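
To make the mechanism concrete, the following is a minimal NumPy sketch of embedding smoothing with per-token aggregation. It is an illustrative sketch under stated assumptions, not the paper's implementation: the noise scale sigma, the number of noisy samples num_samples, the greedy per-sample prediction, and the majority-vote aggregation rule are all assumed choices (the abstract does not specify RESTA's exact aggregation), and the toy embed and next_token_logits functions stand in for a real model's embedding table and decoder.

    import numpy as np

    def resta_decode(embed, next_token_logits, prompt_ids, max_new_tokens=20,
                     sigma=0.1, num_samples=8, rng=None):
        # Sketch of randomized embedding smoothing with token aggregation:
        # at each step, draw several noisy copies of the input embeddings,
        # predict a next token from each copy, and keep the majority vote.
        rng = rng or np.random.default_rng(0)
        ids = list(prompt_ids)
        for _ in range(max_new_tokens):
            base = embed(ids)                      # (seq_len, d_model)
            votes = []
            for _ in range(num_samples):
                noisy = base + rng.normal(0.0, sigma, size=base.shape)
                logits = next_token_logits(noisy)  # (vocab,)
                votes.append(int(np.argmax(logits)))
            # Token-level aggregation: most frequent prediction wins.
            ids.append(max(set(votes), key=votes.count))
        return ids

    # Toy stand-ins (assumed for illustration) for a real model's
    # embedding table and decoder head.
    rng = np.random.default_rng(0)
    V, D = 100, 16
    E = rng.normal(size=(V, D))
    embed = lambda ids: E[np.array(ids)]
    next_token_logits = lambda h: E @ h[-1]  # score vocab by last position
    print(resta_decode(embed, next_token_logits, prompt_ids=[1, 2, 3]))

The intent of aggregating over several noisy copies at each decoding step, per the abstract, is that perturbations which flip the prediction of a single noisy sample are unlikely to flip the majority, so the vote tends to preserve the semantic content on which most samples agree.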