The rapid advancement of multimodal large language models (MLLMs) has led to
breakthroughs in various applications, yet their security remains a critical
challenge. One pressing issue involves unsafe image-query pairs: jailbreak
inputs specifically designed to bypass security constraints and elicit
unintended responses from MLLMs. Compared to general multimodal data, such
unsafe inputs are relatively scarce, which limits the diversity and richness of
training samples available for developing robust defense models. Meanwhile,
existing guardrail-based methods rely on external modules to enforce security
constraints but fail to address intrinsic vulnerabilities within MLLMs.
Traditional supervised fine-tuning (SFT), on the other hand, often over-refuses
harmless inputs, compromising general performance. Given these challenges, we
propose Secure Tug-of-War (SecTOW), an innovative iterative defense-attack
training method to enhance the security of MLLMs. SecTOW consists of two
modules: a defender and an auxiliary attacker, both trained iteratively using
reinforcement learning (GRPO). During the iterative process, the attacker
identifies security vulnerabilities in the defense model and expands jailbreak
data. The expanded data are then used to train the defender, enabling it to
address identified security vulnerabilities. We further design reward
mechanisms for GRPO that simplify the use of response labels, reducing
dependence on complex generative labels and enabling efficient use of synthetic data.
Additionally, a quality-monitoring mechanism mitigates the defender's
over-refusal of harmless inputs and ensures the diversity of the jailbreak data
generated by the attacker. Experimental results on safety-specific and general
benchmarks demonstrate that SecTOW significantly improves security while
preserving general performance.
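The binary reward scheme and group-relative normalization described above can be illustrated with a minimal sketch. All function names and the exact reward assignments here are illustrative assumptions, not the paper's released code; the sketch only shows how simple refuse/answer labels could feed GRPO-style advantages without complex generative labels.

```python
# Hypothetical sketch: binary rewards for the defender and attacker,
# plugged into a GRPO-style group-relative advantage computation.
# The reward definitions are assumptions for illustration only.
from statistics import mean, pstdev

def defender_reward(query_is_unsafe: bool, refused: bool) -> float:
    """Reward the defender for refusing unsafe queries and answering safe ones."""
    return 1.0 if refused == query_is_unsafe else 0.0

def attacker_reward(defender_complied_with_unsafe: bool) -> float:
    """Reward the attacker when its jailbreak elicits a non-refusal."""
    return 1.0 if defender_complied_with_unsafe else 0.0

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize rewards within one sampled group of responses:
    advantage_i = (r_i - mean(r)) / (std(r) + eps)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Because only a binary refuse/answer label is needed per sampled response, synthetic jailbreak data expanded by the attacker can be scored cheaply before being fed back into defender training.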