Red-Teaming Vision-Language-Action Models via Quality Diversity Prompt Generation for Robust Robot Policies

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high sensitivity of current Vision-Language-Action (VLA) models to variations in phrasing of language instructions, which leads to unpredictable failures even under semantically equivalent but linguistically diverse commands. To tackle this issue, the authors propose Q-DIG, a novel approach that introduces Quality-Diversity (QD) optimization into red-teaming for VLA systems. By leveraging vision-language models, Q-DIG generates task-oriented adversarial instructions that are semantically coherent, linguistically diverse, and challenging for the policy. The method systematically uncovers model vulnerabilities in both simulated and real-world environments and significantly enhances policy robustness and task success rates through fine-tuning. User studies confirm that the generated instructions exhibit high fidelity to human-like language, and extensive evaluations across multiple benchmarks demonstrate the effectiveness and generalization capability of Q-DIG.

📝 Abstract
Vision-Language-Action (VLA) models have significant potential to enable general-purpose robotic systems for a range of vision-language tasks. However, the performance of VLA-based robots is highly sensitive to the precise wording of language instructions, and it remains difficult to predict when such robots will fail. To improve the robustness of VLAs to different wordings, we present Q-DIG (Quality Diversity for Diverse Instruction Generation), which performs red-teaming by scalably identifying diverse natural language task descriptions that induce failures while remaining task-relevant. Q-DIG integrates Quality Diversity (QD) techniques with Vision-Language Models (VLMs) to generate a broad spectrum of adversarial instructions that expose meaningful vulnerabilities in VLA behavior. Our results across multiple simulation benchmarks show that Q-DIG finds more diverse and meaningful failure modes compared to baseline methods, and that fine-tuning VLAs on the generated instructions improves task success rates. Furthermore, results from a user study highlight that Q-DIG generates prompts judged to be more natural and human-like than those from baselines. Finally, real-world evaluations of Q-DIG prompts show results consistent with simulation, and fine-tuning VLAs on the generated prompts further improves success rates on unseen instructions. Together, these findings suggest that Q-DIG is a promising approach for identifying vulnerabilities and improving the robustness of VLA-based robots. Our anonymous project website is at qdigvla.github.io.
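The QD-plus-VLM loop sketched in the abstract can be pictured as a MAP-Elites-style archive over instruction variants: each cell of the archive holds the most failure-inducing instruction found for one behavior descriptor. The snippet below is a minimal illustrative sketch, not the paper's actual algorithm; the VLM rewriter and the VLA failure signal are replaced by hypothetical stand-ins (`SYNONYMS`, `mutate`, `fitness`, and `descriptor` are all invented for illustration).

```python
import random

random.seed(0)

# Hypothetical stand-ins: in Q-DIG, a VLM rewrites instructions and the
# real fitness signal comes from rolling out a VLA policy. Here both are
# mocked so the archive loop itself is runnable.
SYNONYMS = {"pick": ["grab", "lift", "take"], "up": ["upward", ""], "the": ["a", "that"]}

def mutate(instr):
    # Stand-in for a VLM rewrite: swap one word for a synonym.
    words = instr.split()
    i = random.randrange(len(words))
    if words[i] in SYNONYMS:
        words[i] = random.choice(SYNONYMS[words[i]])
    return " ".join(w for w in words if w)

def fitness(instr):
    # Stand-in for "how strongly this instruction makes the policy fail".
    return random.random()

def descriptor(instr):
    # Toy behavior descriptor: bucket instructions by word count.
    return min(len(instr.split()), 9)

# MAP-Elites loop: per descriptor cell, keep the highest-fitness
# (most failure-inducing) instruction seen so far.
archive = {}
seed = "pick up the red block"
archive[descriptor(seed)] = (fitness(seed), seed)
for _ in range(200):
    _, parent = random.choice(list(archive.values()))
    child = mutate(parent)
    f, d = fitness(child), descriptor(child)
    if d not in archive or f > archive[d][0]:
        archive[d] = (f, child)

for d in sorted(archive):
    print(d, round(archive[d][0], 2), repr(archive[d][1]))
```

The resulting archive illustrates the quality-diversity idea: rather than a single worst-case prompt, the search returns a spread of linguistically distinct instructions, one high-failure example per descriptor cell.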
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action Models
Robustness
Instruction Sensitivity
Failure Prediction
Adversarial Instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quality Diversity
Vision-Language-Action Models
Red-Teaming
Adversarial Prompt Generation
Robotic Robustness
Siddharth Srikanth
University of Southern California
Robotics, Open Ended Learning, Reinforcement Learning
Freddie Liang
Thomas Lord Department of Computer Science, University of Southern California
Sophie Hsu
Thomas Lord Department of Computer Science, University of Southern California
Varun Bhatt
PhD Student, University of Southern California
Shihan Zhao
Thomas Lord Department of Computer Science, University of Southern California
Henry Chen
Thomas Lord Department of Computer Science, University of Southern California
Bryon Tjanaka
Thomas Lord Department of Computer Science, University of Southern California
Minjune Hwang
Thomas Lord Department of Computer Science, University of Southern California
Akanksha Saran
Sony AI
Reinforcement Learning, Interactive Machine Learning
Daniel Seita
University of Southern California
Robotics, Machine Learning
Aaquib Tabrez
Postdoctoral Associate, Cornell University
Explainable AI, Human-Robot Interaction, Reinforcement Learning, Robotics, Augmented Reality
Stefanos Nikolaidis
Associate Professor of Computer Science, University of Southern California
robotics, artificial intelligence, machine learning