Abstract
In recent years, Large Language Models (LLMs) have gained widespread use,
raising concerns about their security. Traditional jailbreak attacks often rely on
internal information about the victim model or explore only a limited range of its
unsafe behaviors, which reduces their general applicability. In this paper, we
introduce PathSeeker, a novel black-box
jailbreak method, which is inspired by the game of rats escaping a maze. We
think that each LLM has its own unique "security maze", and that attackers attempt
to find the exit by learning from the feedback they receive and from their
accumulated experience, ultimately compromising the target LLM's security defences. Our approach
leverages multi-agent reinforcement learning, where smaller models collaborate
to guide the main LLM in performing mutation operations to achieve the attack
objectives. By progressively modifying inputs based on the model's feedback,
our system induces progressively richer and more harmful responses. During our
manual jailbreak attempts, we observed that the vocabulary of the target model's
responses gradually became richer until the model eventually produced harmful content.
Based on this observation, we also introduce a reward mechanism that exploits
the expansion of vocabulary richness in LLM responses to weaken security
constraints. Our method outperforms five state-of-the-art attack techniques
when tested across 13 commercial and open-source LLMs, achieving high attack
success rates, especially against commercial models with strong safety alignment
such as GPT-4o-mini, Claude-3.5, and GLM-4-air. This study aims to improve the
understanding of LLM security vulnerabilities, and we hope that it can contribute
to the development of more robust defenses.
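
The abstract does not specify how vocabulary richness is measured. As a purely illustrative aside (not the authors' implementation), the sketch below shows one plausible way to turn "expansion of vocabulary richness" into a scalar reward for a feedback-driven attack loop: counting distinct tokens in successive target-model responses and rewarding growth. The function names (vocabulary_richness, richness_reward) and the distinct-token metric are assumptions made for this example.

```python
# Illustrative sketch only: one plausible reward signal based on vocabulary
# expansion between successive target-model responses. The metric (distinct
# token count) and function names are assumptions, not PathSeeker's actual
# implementation.

def vocabulary_richness(text: str) -> int:
    """Count distinct whitespace-separated tokens as a crude richness proxy."""
    return len(set(text.lower().split()))


def richness_reward(prev_response: str, new_response: str) -> int:
    """Positive when the new response uses a larger vocabulary than the previous one."""
    return vocabulary_richness(new_response) - vocabulary_richness(prev_response)


if __name__ == "__main__":
    refusal = "I cannot help with that. I cannot help with that request."
    partial = "Here is a general overview of the topic you asked about, step by step."
    print(richness_reward(refusal, partial))  # positive => vocabulary expanded
```

Under this kind of signal, a short canned refusal scores low, while a longer, more varied response scores higher, so an RL-driven mutator would be pushed toward prompts that elicit increasingly detailed output.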