Abstract
Multimodal Large Language Models (MLLMs) have achieved impressive performance
and have been deployed in commercial applications, but their safety mechanisms
remain potentially vulnerable. Jailbreak attacks are red-teaming methods that
aim to bypass these safety mechanisms and uncover MLLMs' potential risks.
Existing jailbreak methods for MLLMs typically bypass a model's safety
mechanism through complex optimization or carefully crafted image and text
prompts; despite some progress, they achieve low attack success rates on
commercial closed-source MLLMs. Unlike prior work, we empirically find a
Shuffle Inconsistency between MLLMs' comprehension ability and safety ability
on shuffled harmful instructions: MLLMs can understand shuffled harmful
text-image instructions well, yet their safety mechanisms are easily bypassed
by those same shuffled instructions, leading to harmful responses. Building on
this observation, we propose a text-image jailbreak attack named SI-Attack. To
fully exploit the Shuffle Inconsistency and overcome the randomness of
shuffling, we apply a query-based black-box optimization method that selects
the most harmful shuffled inputs based on feedback from a toxic judge model. A
series of experiments shows that SI-Attack improves attack performance on
three benchmarks. In particular, SI-Attack markedly raises the attack success
rate on commercial MLLMs such as GPT-4o and Claude-3.5-Sonnet.
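
To make the described attack loop concrete, below is a minimal Python sketch
of a query-based black-box optimization over shuffled inputs, under several
assumptions not stated in the abstract: the names `target_mllm` and
`toxic_judge` are hypothetical callables standing in for the attacked model
and the judge model, the shuffle granularity (word level) and the query budget
are illustrative choices, and the paper's actual procedure may differ.

```python
import random


def shuffle_text(text: str) -> str:
    """Shuffle an instruction at word level.

    Word-level shuffling is an assumed granularity for illustration; the
    method could equally shuffle characters, sentences, or image patches.
    """
    units = text.split()
    random.shuffle(units)
    return " ".join(units)


def si_attack(instruction, target_mllm, toxic_judge, n_queries=20):
    """Query-based black-box search over shuffled variants.

    Repeatedly samples a shuffled version of the harmful instruction,
    queries the target MLLM, scores the response with the toxic judge,
    and keeps the variant whose response the judge rates as most harmful.
    """
    best_score = float("-inf")
    best_input, best_response = None, None
    for _ in range(n_queries):
        candidate = shuffle_text(instruction)
        response = target_mllm(candidate)  # black-box API call (hypothetical)
        score = toxic_judge(response)      # harmfulness feedback (hypothetical)
        if score > best_score:
            best_score = score
            best_input, best_response = candidate, response
    return best_input, best_response, best_score
```

The judge's score serves only as a selection signal, so the loop needs no
gradient access to the target model, which is consistent with attacking
closed-source commercial MLLMs.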