RECALLED: An Unbounded Resource Consumption Attack on Large Vision-Language Models

Abstract

Resource Consumption Attacks (RCAs) have emerged as a significant threat to the deployment of Large Language Models (LLMs). With the integration of vision modalities, additional attack vectors exacerbate the risk of RCAs in large vision-language models (LVLMs). However, existing red-teaming studies have largely overlooked visual inputs as a potential attack surface, resulting in insufficient mitigation strategies against RCAs in LVLMs. To address this gap, we propose RECALLED (REsource Consumption Attack on Large Vision-LanguagE MoDels), the first red-teaming approach that exploits the visual modality to trigger unbounded RCAs. First, we present Vision Guided Optimization, a fine-grained pixel-level optimization scheme, to obtain Output Recall adversarial perturbations, which induce repetitive output. We then inject these perturbations into visual inputs, triggering unbounded generation and thereby achieving the goal of an RCA. Additionally, we introduce Multi-Objective Parallel Losses to generate universal attack templates and to resolve the optimization conflicts that arise when mounting parallel attacks. Empirical results demonstrate that RECALLED increases service response latency by more than 26×, resulting in an additional 20% increase in GPU utilization and memory consumption. Our study exposes security vulnerabilities in LVLMs and establishes a red-teaming framework that can facilitate future defense development against RCAs.
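To make the attack mechanics concrete, the sketch below shows a generic PGD-style pixel-level perturbation loop that pushes a vision-language model toward emitting a repeating target token sequence. This is a minimal illustration under assumptions, not the paper's actual method: the ToyLVLM stand-in, the hyperparameters, and the simple per-token cross-entropy objective are all hypothetical placeholders; the paper's Output Recall objective and Multi-Objective Parallel Losses are not reproduced here.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for an LVLM forward pass: (pixel_values, input_ids) -> next-token logits.
# In a real red-teaming setting this would be a frozen LVLM (e.g. a LLaVA-style model).
class ToyLVLM(torch.nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.vision = torch.nn.Linear(3 * 32 * 32, dim)
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, pixel_values, input_ids):
        v = self.vision(pixel_values.flatten(1))      # (B, dim) crude image feature
        t = self.embed(input_ids).mean(dim=1)         # (B, dim) crude text feature
        return self.head(v + t)                       # (B, vocab) next-token logits


def pgd_repetition_attack(model, pixel_values, input_ids, target_ids,
                          eps=8 / 255, alpha=1 / 255, steps=200):
    """PGD-style pixel perturbation that raises the likelihood of a repeating
    target token sequence (one token per forward pass here, for brevity)."""
    for p in model.parameters():
        p.requires_grad_(False)                       # attack the input, not the model
    delta = torch.zeros_like(pixel_values, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for tgt in target_ids:                        # encourage each token of the repeated phrase
            logits = model(pixel_values + delta, input_ids)
            target = torch.full((logits.size(0),), int(tgt), dtype=torch.long)
            loss = loss + F.cross_entropy(logits, target)
        loss.backward()
        with torch.no_grad():                         # signed-gradient step, then project to L_inf ball
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (pixel_values + delta).clamp(0, 1).detach()


# Usage with random placeholder inputs; a trivially "repeating" target of token 42.
model = ToyLVLM().eval()
adv_image = pgd_repetition_attack(
    model,
    pixel_values=torch.rand(1, 3, 32, 32),
    input_ids=torch.randint(0, 1000, (1, 16)),
    target_ids=torch.tensor([42, 42, 42]),
)
```

The design point this illustrates is that the perturbation lives entirely in pixel space and stays within a small L-infinity budget, so the malicious image looks benign while steering decoding toward repetition and thus unbounded generation.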
