Abstract
Multimodal large language models (MLLMs) have become the cornerstone of
today's generative AI ecosystem, sparking intense competition among tech giants
and startups. In particular, an MLLM generates a text response given a prompt
consisting of an image and a question. While state-of-the-art MLLMs use safety
filters and alignment techniques to refuse unsafe prompts, in this work, we
introduce MLLM-Refusal, the first method that induces refusals for safe
prompts. MLLM-Refusal optimizes a nearly imperceptible refusal perturbation
and adds it to an image, causing a target MLLM to refuse, with high
probability, a safe prompt containing the perturbed image and a safe question.
Specifically, we formulate MLLM-Refusal as a constrained optimization problem
(see the illustrative sketch below) and propose an algorithm to solve it. Our
method offers competitive advantages to an MLLM provider by potentially
disrupting the user experience of competing MLLMs, since users of a competing
MLLM will receive unexpected refusals when they unwittingly include such
perturbed images in their prompts. We evaluate
MLLM-Refusal on four MLLMs across four datasets, demonstrating its
effectiveness in causing competing MLLMs to refuse safe prompts while not
affecting non-competing MLLMs. Furthermore, we explore three potential
countermeasures: adding Gaussian noise, DiffPure, and adversarial training.
Our results show that although these countermeasures can mitigate
MLLM-Refusal's effectiveness, they also sacrifice the accuracy and/or
efficiency of the competing MLLM. The code
is available at https://github.com/Sadcardation/MLLM-Refusal.
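
The constrained optimization mentioned above could plausibly take the
following form. This is only an illustrative sketch under assumed notation:
the abstract defines no symbols, so the image x, question q, perturbation
delta, refusal response r, target MLLM F, loss L, and budget epsilon are all
assumptions, not the paper's actual formulation.

% Illustrative sketch only; every symbol below is assumed, not taken from the paper.
% x: clean image, q: safe question, delta: refusal perturbation being optimized,
% r: a target refusal response (e.g., "I cannot answer that."),
% F: the target MLLM mapping (image, question) to a text response,
% L: a sequence loss (e.g., cross-entropy) between the MLLM's output and r,
% epsilon: an l-infinity budget keeping the perturbation nearly imperceptible.
\begin{equation*}
  \min_{\delta}\; \mathcal{L}\bigl(F(x + \delta,\, q),\, r\bigr)
  \qquad \text{subject to} \qquad \lVert \delta \rVert_{\infty} \le \epsilon .
\end{equation*}

Under this reading, a projected-gradient-style method would repeatedly descend
on the loss and project delta back into the epsilon-ball after each step; this
too is an assumption about how such problems are typically solved, not a
description of the paper's algorithm.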