LIBERO-Plus: In-depth Robustness Analysis of Vision-Language-Action Models

📅 2025-10-15
🤖 AI Summary
This work exposes systematic robustness deficiencies of vision-language-action (VLA) models in realistic settings: high benchmark success rates mask severe vulnerability to real-world perturbations. To address this, the authors introduce the first controllable perturbation evaluation framework spanning seven dimensions (object layout, camera viewpoint, robot initial state, language instruction, illumination, background texture, and sensor noise) and conduct comprehensive stress-testing of leading VLA models. Results reveal catastrophic performance drops, from 95% to under 30%, under minor perturbations; critically, models consistently ignore variations in language instructions, indicating reliance on superficial statistical shortcuts rather than genuine semantic grounding. This challenges the prevailing "high score = high intelligence" evaluation paradigm. The study proposes a multi-dimensional robustness evaluation standard, providing both methodological foundations and empirical evidence for developing reliable, generalizable embodied AI systems.

📝 Abstract
Vision-Language-Action (VLA) models report impressive success rates on robotic manipulation benchmarks, yet these results may mask fundamental weaknesses in robustness. We perform a systematic vulnerability analysis by introducing controlled perturbations across seven dimensions: object layout, camera viewpoint, robot initial state, language instruction, lighting conditions, background texture, and sensor noise. We comprehensively analyze multiple state-of-the-art models and reveal consistent brittleness beneath apparent competence. Our analysis exposes critical weaknesses: models exhibit extreme sensitivity to perturbation factors such as camera viewpoints and robot initial states, with performance dropping from 95% to below 30% under modest perturbations. Surprisingly, models are largely insensitive to language variations; further experiments reveal that models tend to ignore language instructions entirely. Our findings challenge the assumption that high benchmark scores equate to true competence and highlight the need for evaluation practices that assess reliability under realistic variation.
Problem

Research questions and friction points this paper is trying to address.

Analyze VLA model robustness across seven perturbation dimensions
Reveal extreme sensitivity to camera viewpoints and robot states
Discover models ignore language instructions despite apparent competence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic vulnerability analysis across seven perturbation dimensions
Revealed model brittleness under camera and initial state variations
Showed that models largely ignore language instructions despite benchmark success
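The stress-testing protocol summarized above can be sketched as a per-dimension sweep: evaluate the same policy under each perturbation in isolation and compare against the unperturbed baseline. This is a minimal illustrative sketch, not the paper's code; `run_episode`, `robustness_report`, and the toy policy are hypothetical stand-ins.

```python
# Hypothetical sketch of a seven-dimension robustness sweep.
# The episode runner is an assumed callable: it takes a perturbation
# name (or None for the unperturbed baseline) and returns True/False.

PERTURBATIONS = [
    "object_layout", "camera_viewpoint", "robot_initial_state",
    "language_instruction", "illumination", "background_texture",
    "sensor_noise",
]

def success_rate(run_episode, perturbation=None, trials=50):
    """Fraction of successful episodes under one perturbation dimension."""
    return sum(run_episode(perturbation) for _ in range(trials)) / trials

def robustness_report(run_episode, trials=50):
    """Per-dimension success rates plus the unperturbed baseline."""
    report = {"baseline": success_rate(run_episode, None, trials)}
    for p in PERTURBATIONS:
        report[p] = success_rate(run_episode, p, trials)
    return report

# Toy policy: always succeeds unperturbed but fails under camera shifts,
# mimicking the brittleness pattern the paper reports.
def toy_episode(perturbation):
    return perturbation != "camera_viewpoint"

report = robustness_report(toy_episode, trials=10)
```

Comparing each dimension's rate to the baseline localizes where a model breaks; the paper's finding that language perturbations barely change the rate is what reveals instruction-ignoring behavior.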
👥 Authors
Senyu Fei (Tongji University)
Siyin Wang (Fudan University)
Junhao Shi (Fudan University)
Zihao Dai (Fudan University)
Jikun Cai (Fudan University)
Pengfang Qian (Fudan University)
Li Ji (Fudan University)
Xinzhe He (Fudan University)
Shiduo Zhang (Fudan University): Embodied AI, Foundation Models
Zhaoye Fei (Fudan University): Natural Language Processing
Jinlan Fu (National University of Singapore): Natural Language Processing, Vision and Language, Large Language Model
Jingjing Gong (SII): Machine Learning, AI for Science, Large Language Model, Embodied AI
Xipeng Qiu (Fudan University)