🤖 AI Summary
This work exposes systematic robustness deficiencies in vision-language-action (VLA) models: high benchmark success rates mask severe vulnerability to real-world perturbations. To probe this, the authors introduce a controllable perturbation evaluation framework spanning seven dimensions (object layout, camera viewpoint, robot initial state, language instruction, illumination, background texture, and sensor noise) and stress-test leading VLA models against each. Under modest perturbations, success rates collapse from 95% to below 30%. Critically, the models are largely insensitive to changes in language instructions, indicating reliance on superficial statistical shortcuts rather than genuine semantic grounding, which challenges the prevailing “high score = high intelligence” evaluation paradigm. The study proposes a multi-dimensional robustness evaluation standard, providing both methodological foundations and empirical evidence for developing reliable, generalizable embodied AI systems.
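To make the protocol concrete, here is a minimal Python sketch of a one-dimension-at-a-time perturbation sweep of the kind described above. The `PerturbationConfig` fields, value ranges, and `run_episode` hook are hypothetical stand-ins for illustration, not the authors' actual framework or API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PerturbationConfig:
    """Hypothetical knobs for the seven perturbation dimensions;
    field names and ranges are illustrative, not the paper's."""
    object_layout_shift: float = 0.0   # meters of random object displacement
    camera_yaw_deg: float = 0.0        # camera viewpoint rotation
    robot_joint_noise: float = 0.0     # stddev (rad) added to initial joints
    instruction: str = "pick up the red block"
    light_intensity: float = 1.0       # 1.0 = nominal illumination
    background_id: int = 0             # index into candidate background textures
    sensor_noise_std: float = 0.0      # stddev of additive pixel noise

def run_episode(model, config: PerturbationConfig) -> bool:
    """Placeholder: roll the policy out in simulation under `config`
    and return task success. Wire this to your simulator of choice."""
    raise NotImplementedError

def sweep_dimension(model, field: str, values, episodes: int = 50) -> dict:
    """Vary one dimension while holding the others at nominal values,
    returning the success rate at each perturbation level."""
    nominal = PerturbationConfig()
    rates = {}
    for value in values:
        cfg = replace(nominal, **{field: value})
        successes = sum(run_episode(model, cfg) for _ in range(episodes))
        rates[value] = successes / episodes
    return rates
```

A full evaluation would repeat this sweep for each of the seven fields, e.g. `sweep_dimension(model, "camera_yaw_deg", [0, 5, 10, 15])`, and compare each resulting success-rate curve against the unperturbed baseline.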
📝 Abstract
Vision-Language-Action (VLA) models report impressive success rates on robotic manipulation benchmarks, yet these results may mask fundamental weaknesses in robustness. We perform a systematic vulnerability analysis by introducing controlled perturbations across seven dimensions: object layout, camera viewpoint, robot initial state, language instruction, lighting conditions, background texture, and sensor noise. We comprehensively analyze multiple state-of-the-art models and reveal consistent brittleness beneath apparent competence. Our analysis exposes critical weaknesses: models exhibit extreme sensitivity to perturbation factors such as camera viewpoint and robot initial state, with performance dropping from 95% to below 30% under modest perturbations. Surprisingly, models are largely insensitive to language variations, and further experiments reveal that they tend to ignore language instructions completely. Our findings challenge the assumption that high benchmark scores equate to true competence and highlight the need for evaluation practices that assess reliability under realistic variation.
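The language-insensitivity finding suggests a simple diagnostic, sketched below under the same hypothetical `PerturbationConfig`/`run_episode` interface as above: compare success rates under the intended instruction, a mismatched instruction, and an empty one. Near-identical rates would indicate the policy is keying on visual context rather than the command.

```python
def language_sensitivity_probe(model, episodes: int = 100) -> dict:
    """Success rates across instruction conditions; flat results
    suggest the model ignores the language channel entirely."""
    conditions = {
        "original":   "pick up the red block",
        "mismatched": "open the top drawer",  # names a different task
        "empty":      "",
    }
    return {
        name: sum(run_episode(model, PerturbationConfig(instruction=text))
                  for _ in range(episodes)) / episodes
        for name, text in conditions.items()
    }
```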