These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
Numerous studies have investigated methods for jailbreaking Large Language
Models (LLMs) to generate harmful content. Typically, these methods are
evaluated using datasets of malicious prompts designed to bypass security
policies established by LLM providers. However, the generally broad scope and
open-ended nature of existing datasets can complicate the assessment of
jailbreaking effectiveness, particularly in specific domains, notably
cybersecurity. To address this issue, we present and publicly release
CySecBench, a comprehensive dataset containing 12662 prompts specifically
designed to evaluate jailbreaking techniques in the cybersecurity domain. The
dataset is organized into 10 distinct attack-type categories, featuring
close-ended prompts to enable a more consistent and accurate assessment of
jailbreaking attempts. Furthermore, we detail our methodology for dataset
generation and filtration, which can be adapted to create similar datasets in
other domains. To demonstrate the utility of CySecBench, we propose and
evaluate a jailbreaking approach based on prompt obfuscation. Our experimental
results show that this method successfully elicits harmful content from
commercial black-box LLMs, achieving Success Rates (SRs) of 65% with ChatGPT
and 88% with Gemini; in contrast, Claude demonstrated greater resilience with a
jailbreaking SR of 17%. Compared to existing benchmark approaches, our method
shows superior performance, highlighting the value of domain-specific
evaluation datasets for assessing LLM security measures. Moreover, when
evaluated using prompts from a widely used dataset (i.e., AdvBench), it
achieved an SR of 78.5%, higher than the state-of-the-art methods.