Abstract
Recent advances in large language models (LLMs) have demonstrated remarkable
potential in the field of natural language processing. Unfortunately, LLMs face
significant security and ethical risks. Although techniques such as safety
alignment have been developed for defense, prior research reveals the possibility of
bypassing such defenses through well-designed jailbreak attacks. In this paper,
we propose QueryAttack, a novel framework to examine the generalizability of
safety alignment. By treating LLMs as knowledge databases, we translate
malicious queries in natural language into a structured, non-natural query
language to bypass the safety alignment mechanisms of LLMs. We conduct
extensive experiments on mainstream LLMs, and the results show that QueryAttack
not only achieves high attack success rates (ASRs) but also bypasses
various defense methods. Furthermore, we tailor a defense method against
QueryAttack, which can reduce the ASR by up to $64\%$ on GPT-4-1106. Our code is
available at https://github.com/horizonsinzqs/QueryAttack.