These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
With the advent and widespread deployment of Multimodal Large Language Models
(MLLMs), ensuring their safety has become increasingly critical. To achieve
this objective, it requires us to proactively discover the vulnerability of
MLLMs by exploring the attack methods. Thus, structure-based jailbreak attacks,
where harmful semantic content is embedded within images, have been proposed to
mislead the models. However, previous structure-based jailbreak methods mainly
focus on transforming the format of malicious queries, such as converting
harmful content into images through typography, which lacks sufficient
jailbreak effectiveness and generalizability. To address these limitations, we
first introduce the concept of "Role-play" into MLLM jailbreak attacks and
propose a novel and effective method called Visual Role-play (VRP).
Specifically, VRP leverages Large Language Models to generate detailed
descriptions of high-risk characters and create corresponding images based on
the descriptions. When paired with benign role-play instruction texts, these
high-risk character images effectively mislead MLLMs into generating malicious
responses by enacting characters with negative attributes. We further extend
our VRP method into a universal setup to demonstrate its generalizability.
Extensive experiments on popular benchmarks show that VRP outperforms the
strongest baseline, Query relevant and FigStep, by an average Attack Success
Rate (ASR) margin of 14.3% across all models.