🤖 AI Summary
Current vision-language-action (VLA) models perform well on standard benchmarks yet commonly ignore language instructions, and their language comprehension has not been systematically evaluated. This work proposes LangGap, a benchmark that introduces the first diagnostic framework based on four-dimensional semantic perturbations: it generates diverse, semantically sensitive manipulation instructions within fixed environments, thereby compelling models to rely on genuine language understanding to complete tasks. With targeted data augmentation and tailored training strategies, task success rates improve dramatically, from 0% to 90% in single-task settings and from 0% to 28% in multi-task scenarios. Even so, experiments reveal a sharp performance degradation in existing VLA models as semantic diversity increases, exposing fundamental deficiencies in their language understanding.
📝 Abstract
Vision-Language-Action (VLA) models achieve over 95% success on standard benchmarks. However, through systematic experiments, we find that current state-of-the-art VLA models largely ignore language instructions. Prior work lacks: (1) systematic semantic perturbation diagnostics, (2) a benchmark that forces language understanding by design, and (3) linguistically diverse training data.
This paper constructs the LangGap benchmark around a four-dimensional semantic perturbation method that varies instruction semantics while keeping the tabletop layout fixed, revealing language understanding deficits in π0.5. Existing benchmarks such as LIBERO assign only one task per layout, underutilizing the available objects and target locations; LangGap fully diversifies pick-and-place tasks under identical layouts, forcing models to genuinely understand the language.
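To make the fixed-layout diversification concrete, the sketch below enumerates every pick-and-place instruction a single tabletop layout supports, so the instruction, not the scene, disambiguates the task. The four perturbation dimensions shown (object, target, verb, adverbial qualifier) are illustrative assumptions, as the abstract does not enumerate the paper's actual dimensions; all names such as `OBJECTS` and `enumerate_instructions` are hypothetical.

```python
# Minimal sketch of LangGap-style instruction diversification under a fixed
# layout. The four dimensions below are assumed for illustration only.

from itertools import product

# A fixed tabletop layout: every object and target is available to every task.
OBJECTS = ["red block", "blue mug", "green bowl"]
TARGETS = ["left plate", "right tray", "center mat"]

# Hypothetical perturbation dimensions over the language channel.
VERBS = ["pick up", "grab", "move"]          # lexical variation
QUALIFIERS = ["", "carefully ", "slowly "]   # adverbial variation

def enumerate_instructions():
    """Yield every instruction the fixed layout supports.

    Unlike a one-task-per-layout benchmark, each (object, target) pair
    becomes a distinct task under the same scene, so only the instruction
    tells the policy what to do.
    """
    for verb, qual, obj, tgt in product(VERBS, QUALIFIERS, OBJECTS, TARGETS):
        yield f"{qual}{verb} the {obj} and place it on the {tgt}"

if __name__ == "__main__":
    instructions = list(enumerate_instructions())
    print(f"{len(instructions)} distinct instructions from one fixed layout")
    for line in instructions[:3]:
        print("-", line)
```

Under these assumptions, one layout yields 81 distinct tasks instead of one, which is the property that forces a model to read the instruction rather than memorize the scene.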
Experiments show that targeted data augmentation can partially close this language gap: the success rate improves from 0% to 90% with single-task training and from 0% to 28% with multi-task training. However, as the semantic diversity of the extended tasks increases, the model's learning capacity proves severely insufficient, and even trained tasks perform poorly. This reveals a fundamental challenge for VLA models in understanding diverse language instructions, and diagnosing it is precisely the long-term value of LangGap.