These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
Vision-Large-Language-models(VLMs) have great application prospects in
autonomous driving. Despite the ability of VLMs to comprehend and make
decisions in complex scenarios, their integration into safety-critical
autonomous driving systems poses serious security risks. In this paper, we
propose BadVLMDriver, the first backdoor attack against VLMs for autonomous
driving that can be launched in practice using physical objects. Unlike
existing backdoor attacks against VLMs that rely on digital modifications,
BadVLMDriver uses common physical items, such as a red balloon, to induce
unsafe actions like sudden acceleration, highlighting a significant real-world
threat to autonomous vehicle safety. To execute BadVLMDriver, we develop an
automated pipeline utilizing natural language instructions to generate backdoor
training samples with embedded malicious behaviors. This approach allows for
flexible trigger and behavior selection, enhancing the stealth and practicality
of the attack in diverse scenarios. We conduct extensive experiments to
evaluate BadVLMDriver for two representative VLMs, five different trigger
objects, and two types of malicious backdoor behaviors. BadVLMDriver achieves a
92% attack success rate in inducing a sudden acceleration when coming across a
pedestrian holding a red balloon. Thus, BadVLMDriver not only demonstrates a
critical security risk but also emphasizes the urgent need for developing
robust defense mechanisms to protect against such vulnerabilities in autonomous
driving technologies.
External Datasets
nuScenes
collected datasets
References
Drivegpt4: Interpretable end-to-end autonomous driving via large language model