AI Summary
Existing VLA evaluation benchmarks, notably LIBERO, suffer from flawed train-evaluation protocols that inadvertently encourage models to rely on memorized associations between action sequences and fixed environment layouts, rather than genuine task understanding or generalization, leading to inflated performance estimates. To address this, we propose LIBERO-PRO, the first multi-dimensional perturbation benchmark for Vision-Language-Action (VLA) models. It systematically introduces controlled perturbations across four dimensions (objects, states, instructions, and environments), including object substitution, instruction corruption, and environment reconfiguration, to rigorously assess robustness and zero-shot generalization. Experiments reveal a dramatic performance collapse: state-of-the-art VLA models achieve over 90% accuracy on standard LIBERO but drop to 0.0% on LIBERO-PRO, exposing their reliance on spurious memorization rather than compositional reasoning. This work establishes a more rigorous, fair, and challenging evaluation paradigm for VLA models.
Abstract
LIBERO has emerged as a widely adopted benchmark for evaluating Vision-Language-Action (VLA) models; however, its current training and evaluation settings are problematic, often leading to inflated performance estimates and preventing fair model comparison. To address these issues, we introduce LIBERO-PRO, an extended LIBERO benchmark that systematically evaluates model performance under reasonable perturbations across four dimensions: manipulated objects, initial states, task instructions, and environments. Experimental results reveal that, although existing models achieve over 90% accuracy under the standard LIBERO evaluation, their performance collapses to 0.0% under our generalized setting. Crucially, this discrepancy exposes the models' reliance on rote memorization of action sequences and environment layouts from the training set, rather than genuine task understanding or environmental perception. For instance, models persist in executing grasping actions when the target object is replaced with irrelevant items, and their outputs remain unchanged even when the instructions are corrupted or replaced with garbled tokens. These findings expose severe flaws in current evaluation practices, and we call on the community to abandon misleading methodologies in favor of robust assessments of model generalization and comprehension. Our code is available at: https://github.com/Zxy-MLlab/LIBERO-PRO.
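To make the four perturbation dimensions concrete, here is a minimal illustrative sketch of how a task specification could be perturbed along one dimension at a time. All names here (`TaskSpec`, `perturb_task`, the distractor list) are hypothetical and do not reflect the actual LIBERO-PRO API; the sketch only mirrors the perturbation types described above (object substitution, state change, instruction corruption, environment reconfiguration).

```python
from dataclasses import dataclass, replace
import random

@dataclass(frozen=True)
class TaskSpec:
    target_object: str   # object the instruction refers to
    initial_state: str   # e.g. object pose, drawer open/closed
    instruction: str     # natural-language command
    environment: str     # scene/layout identifier

def perturb_task(task: TaskSpec, dimension: str, rng: random.Random) -> TaskSpec:
    """Apply one controlled perturbation along a single dimension (hypothetical sketch)."""
    if dimension == "object":
        # Object substitution: swap the target for an unrelated distractor.
        distractors = ["mug", "stapler", "apple"]
        new_obj = rng.choice([d for d in distractors if d != task.target_object])
        return replace(
            task,
            target_object=new_obj,
            instruction=task.instruction.replace(task.target_object, new_obj),
        )
    if dimension == "state":
        # Initial-state perturbation: mark the starting configuration as shifted.
        return replace(task, initial_state="shifted:" + task.initial_state)
    if dimension == "instruction":
        # Instruction corruption: scramble the word order of the command.
        words = task.instruction.split()
        rng.shuffle(words)
        return replace(task, instruction=" ".join(words))
    if dimension == "environment":
        # Environment reconfiguration: swap in a rearranged scene layout.
        return replace(task, environment="reconfigured:" + task.environment)
    raise ValueError(f"unknown dimension: {dimension}")
```

A robust model should still locate and manipulate the correct object under the `"state"` and `"environment"` perturbations, and should change its behavior (rather than replay a memorized trajectory) under `"object"` and `"instruction"` perturbations.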