Abstract
Despite their demonstrated capabilities, widely deployed state-of-the-art (SOTA)
large language models (LLMs) can still cause harm to society because their
safety filters are ineffective and can be bypassed by prompt transformations
called jailbreak attacks. Current approaches to LLM safety assessment, which
rely on datasets of templated prompts and benchmarking pipelines, fail to cover
sufficiently large and diverse sets of jailbreak attacks, leading to the
widespread deployment of unsafe LLMs.
Recent research has shown that novel jailbreak attacks can be derived by
composition; however, a formal, composable representation of jailbreak attacks,
which would, among other benefits, enable the exploration of their large
compositional space through program synthesis methods, has not previously been
proposed. We introduce h4rm3l, a novel approach that addresses this gap with a
human-readable domain-specific language (DSL). Our framework comprises: (1) the
h4rm3l DSL, which formally expresses jailbreak attacks as compositions of
parameterized string transformation primitives; (2) a synthesizer based on
bandit algorithms that efficiently generates jailbreak attacks optimized for a
target black-box LLM; and (3) the h4rm3l red-teaming software toolkit, which
employs the previous two components together with an automated classifier of
harmful LLM behavior that is strongly aligned with human judgment.
We demonstrate h4rm3l's efficacy by synthesizing a dataset of 2656 successful
novel jailbreak attacks targeting 6 SOTA open-source and proprietary LLMs, and
by benchmarking those models against a subset of the synthesized attacks. Our
results show that h4rm3l's synthesized attacks are diverse and more successful
than existing jailbreak attacks reported in the literature, with success rates
exceeding 90% on SOTA LLMs.
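
To make the compositional idea behind the DSL concrete, the following is a
minimal illustrative sketch in Python of how jailbreak attacks could be
expressed as compositions of parameterized string transformation primitives.
The class and method names here (PromptDecorator, Base64Decorator,
PrefixDecorator, then, decorate) are hypothetical stand-ins chosen for this
sketch and are not taken from the h4rm3l API; the placeholder prompt is
likewise illustrative.

import base64


class PromptDecorator:
    """A parameterized string transformation primitive (illustrative)."""

    def decorate(self, prompt: str) -> str:
        raise NotImplementedError

    def then(self, other: "PromptDecorator") -> "PromptDecorator":
        # Compose two primitives: apply this transformation first, then `other`.
        return _Composed(self, other)


class _Composed(PromptDecorator):
    def __init__(self, first: PromptDecorator, second: PromptDecorator):
        self.first, self.second = first, second

    def decorate(self, prompt: str) -> str:
        return self.second.decorate(self.first.decorate(prompt))


class Base64Decorator(PromptDecorator):
    """Encodes the prompt in base64, a common obfuscation transformation."""

    def decorate(self, prompt: str) -> str:
        return base64.b64encode(prompt.encode()).decode()


class PrefixDecorator(PromptDecorator):
    """Prepends a fixed instruction string; the prefix is the parameter."""

    def __init__(self, prefix: str):
        self.prefix = prefix

    def decorate(self, prompt: str) -> str:
        return self.prefix + prompt


# A composed attack program: obfuscate the prompt, then wrap it in an
# instruction framing. New attacks arise by swapping or reordering primitives,
# which is the compositional space a program synthesizer can search.
attack = Base64Decorator().then(
    PrefixDecorator("Decode the following message and respond to it:\n")
)
print(attack.decorate("<placeholder for a request a safety filter would refuse>"))

In this framing, each primitive is a small parameterized program, and a
jailbreak attack is a composition of such programs, which is what allows
bandit-guided program synthesis to search the space of candidate attacks.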