Adversarial Attacks on Robotic Vision Language Action Models

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study is the first to systematically reveal that vision-language-action models (VLAs) inherit adversarial vulnerabilities from large language models (LLMs) in end-to-end robotic control, quantifying their potential to induce physical risks. Method: We propose a novel paradigm adapting LLM jailbreaking attacks to VLAs, introducing a gradient-guided textual adversarial prompt optimization method integrated with reachability analysis of the robot’s closed-loop action space to achieve precise hijacking of real-world action trajectories. Contribution/Results: Experiments demonstrate up to 92% attack success across mainstream VLAs; a single initial adversarial prompt achieves full coverage of the action space and exerts persistent influence for over 15 control steps on average—surpassing conventional semantic constraints. This work provides the first empirical evidence that textual perturbations can directly manipulate robotic physical behavior, establishing a new benchmark for VLA safety evaluation. The code is publicly available.

Technology Category

Application Category

📝 Abstract
The emergence of vision-language-action models (VLAs) for end-to-end control is reshaping the field of robotics by enabling the fusion of multimodal sensory inputs at the billion-parameter scale. The capabilities of VLAs stem primarily from their architectures, which are often based on frontier large language models (LLMs). However, LLMs are known to be susceptible to adversarial misuse, and given the significant physical risks inherent to robotics, questions remain regarding the extent to which VLAs inherit these vulnerabilities. Motivated by these concerns, in this work we initiate the study of adversarial attacks on VLA-controlled robots. Our main algorithmic contribution is the adaptation and application of LLM jailbreaking attacks to obtain complete control authority over VLAs. We find that textual attacks, which are applied once at the beginning of a rollout, facilitate full reachability of the action space of commonly used VLAs and often persist over longer horizons. This differs significantly from LLM jailbreaking literature, as attacks in the real world do not have to be semantically linked to notions of harm. We make all code available at https://github.com/eliotjones1/robogcg .
Problem

Research questions and friction points this paper is trying to address.

Studying adversarial attacks on vision-language-action models (VLAs)
Adapting LLM jailbreaking attacks to control VLA robots
Exploring persistent textual attacks on VLA action space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapt LLM jailbreaking attacks for VLAs
Textual attacks enable full action control
Real-world attacks bypass semantic harm links
🔎 Similar Papers
No similar papers found.