Abstract
Warning: This paper contains content that may involve potentially harmful
behaviours, discussed strictly for research purposes.
Jailbreak attacks can undermine the safety of Large Language Model (LLM)
applications, especially chatbots. Studying jailbreak techniques is therefore an
important AI red-teaming task for improving the safety of these applications.
In this paper, we introduce TombRaider, a novel jailbreak technique that
exploits LLMs' ability to store, retrieve, and use historical knowledge.
TombRaider employs two agents: an inspector agent that extracts relevant
historical information and an attacker agent that generates adversarial prompts,
enabling it to effectively bypass safety filters. We extensively evaluated
TombRaider on six popular models. Experimental results showed that TombRaider
outperformed state-of-the-art jailbreak techniques, achieving nearly 100%
attack success rates (ASRs) on bare models and maintaining over 55.4% ASR
against defence mechanisms. Our findings highlight critical vulnerabilities in
existing LLM safeguards, underscoring the need for more robust safety defences.