Abstract
As prompt jailbreaking of Large Language Models (LLMs) attracts growing attention, it is important to establish both a generalized research paradigm for evaluating attack strength and a baseline model for conducting finer-grained experiments. In this paper, we propose a novel approach that focuses on a set of
target questions that are inherently more sensitive to jailbreak prompts,
aiming to circumvent the limitations posed by enhanced LLM security. Through
designing and analyzing these sensitive questions, we reveal a more effective method for identifying vulnerabilities in LLMs, thereby contributing to the advancement of LLM security. This research not only challenges existing jailbreaking methodologies but also helps fortify LLMs against potential exploits.