🤖 AI Summary
This work addresses the high sensitivity of current Vision-Language-Action (VLA) models to variations in the phrasing of language instructions, which leads to unpredictable failures even under semantically equivalent but linguistically diverse commands. To tackle this issue, the authors propose Q-DIG, a novel approach that introduces Quality-Diversity (QD) optimization into red-teaming for VLA systems. By leveraging vision-language models, Q-DIG generates task-oriented adversarial instructions that are semantically coherent, linguistically diverse, and challenging for the policy. The method systematically uncovers model vulnerabilities in both simulated and real-world environments, and fine-tuning on the generated instructions significantly improves policy robustness and task success rates. A user study confirms that the generated instructions exhibit high fidelity to human-like language, and extensive evaluations across multiple benchmarks demonstrate the effectiveness and generalization capability of Q-DIG.
📝 Abstract
Vision-Language-Action (VLA) models have significant potential to enable general-purpose robotic systems for a range of vision-language tasks. However, the performance of VLA-based robots is highly sensitive to the precise wording of language instructions, and it remains difficult to predict when such robots will fail. To improve the robustness of VLAs to different wordings, we present Q-DIG (Quality Diversity for Diverse Instruction Generation), which performs red-teaming by scalably identifying diverse natural language task descriptions that induce failures while remaining task-relevant. Q-DIG integrates Quality Diversity (QD) techniques with Vision-Language Models (VLMs) to generate a broad spectrum of adversarial instructions that expose meaningful vulnerabilities in VLA behavior. Our results across multiple simulation benchmarks show that Q-DIG finds more diverse and meaningful failure modes than baseline methods, and that fine-tuning VLAs on the generated instructions improves task success rates. Furthermore, results from a user study highlight that Q-DIG generates prompts judged to be more natural and human-like than those from baselines. Finally, real-world evaluations of Q-DIG prompts show results consistent with simulation, and fine-tuning VLAs on the generated prompts further improves success rates on unseen instructions. Together, these findings suggest that Q-DIG is a promising approach for identifying vulnerabilities and improving the robustness of VLA-based robots. Our anonymous project website is at qdigvla.github.io.
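For readers unfamiliar with Quality Diversity optimization, the core idea can be illustrated with a minimal MAP-Elites-style sketch: maintain an archive of one elite per behavior cell, keeping each candidate only if it beats the incumbent of its cell. This is a generic QD illustration, not the authors' implementation; the `quality` and `descriptor` functions below are hypothetical toy stand-ins for the paper's VLM-based scoring of how adversarial an instruction is and how its phrasing is categorized.

```python
import random

# Hypothetical stand-ins: in a red-teaming setting, quality would measure
# how strongly an instruction degrades the policy, and the descriptor
# would bucket its linguistic style. Here both operate on a float in [0, 1].
def quality(x):
    return -(x - 0.7) ** 2  # peaks at x = 0.7

def descriptor(x):
    return int(x * 10) % 10  # discretize into 10 behavior cells

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell -> (quality, solution)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate a randomly chosen elite from the archive.
            _, parent = archive[rng.choice(list(archive))]
            child = min(max(parent + rng.gauss(0, 0.1), 0.0), 1.0)
        else:
            # Occasionally sample fresh solutions to seed new cells.
            child = rng.random()
        cell = descriptor(child)
        q = quality(child)
        # Replace the cell's elite only if the child is better (or cell is empty).
        if cell not in archive or q > archive[cell][0]:
            archive[cell] = (q, child)
    return archive

archive = map_elites()
print(f"{len(archive)} behavior cells filled")
```

The key difference from pure optimization is that the archive retains the best solution per cell rather than a single global best, yielding a diverse set of high-quality solutions, which is what lets Q-DIG surface many distinct failure-inducing phrasings instead of one.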