Robust Skills, Brittle Grounding: Diagnosing Restricted Generalization in Vision-Language Action Policies via Multi-Object Picking

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether existing vision-language-action (VLA) policies genuinely understand language instructions or merely exploit fixed associations between objects and positions when generalizing out-of-distribution. Through a multi-object grasping task, the authors introduce a task hierarchy and decomposed metrics to separately evaluate motor skill proficiency and instruction-conditioned success rates. By combining object-position randomization, cross-distribution test sets, and systematic experiments on models such as SmolVLA and π₀.₅, they demonstrate that current VLA policies exhibit robust execution of motor primitives but fragile language-to-object grounding. This study presents the first decoupled assessment of VLA policy capabilities, revealing that their generalization bottleneck stems primarily from limitations in language grounding rather than motor control.

📝 Abstract
Vision-language action (VLA) policies often report strong manipulation benchmark performance with relatively few demonstrations, but it remains unclear whether this reflects robust language-to-object grounding or reliance on object–location correlations that do not transfer beyond the training distribution. We present a controlled multi-object picking study that progressively increases object placement variability up to full workspace randomization and evaluates held-out object–location pairings that break familiar associations without increasing spatial difficulty. Across these stress tests and data scaling, we find that for representative VLA policies, including SmolVLA and $\pi_{0.5}$, execution of the manipulation primitive remains substantially more reliable than instruction-conditioned task success in harder regimes, suggesting that manipulation skill acquisition is decoupled from instruction following. We recommend augmenting manipulation benchmarks with task ladders and decomposed metrics that separately measure primitive execution and instruction-conditioned success to better diagnose instruction-grounded generalization.
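The decomposed metrics the abstract recommends can be illustrated with a minimal sketch. The episode schema and field names below are assumptions for illustration, not the authors' actual evaluation code: each episode records whether the pick primitive succeeded on *any* object and whether the *instructed* object was picked.

```python
# Hypothetical sketch of decomposed evaluation metrics; the Episode fields
# are illustrative assumptions, not the paper's data format.
from dataclasses import dataclass

@dataclass
class Episode:
    grasped_something: bool  # pick primitive executed successfully on some object
    grasped_target: bool     # the object named in the instruction was picked

def decomposed_metrics(episodes):
    """Return (primitive execution rate, instruction-conditioned success rate)."""
    n = len(episodes)
    primitive_rate = sum(e.grasped_something for e in episodes) / n
    instruction_rate = sum(e.grasped_target for e in episodes) / n
    return primitive_rate, instruction_rate

episodes = [
    Episode(True, True),    # picked the instructed object
    Episode(True, False),   # picked a distractor: skill succeeded, grounding failed
    Episode(False, False),  # failed to grasp anything
    Episode(True, False),   # another grounding failure
]
prim, inst = decomposed_metrics(episodes)
print(f"primitive execution: {prim:.2f}, instruction-conditioned success: {inst:.2f}")
```

A large gap between the two rates, as in this toy example, is the signature the paper attributes to brittle language grounding rather than weak motor control.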
Problem

Research questions and friction points this paper is trying to address.

vision-language action
instruction grounding
generalization
multi-object picking
object-location correlation
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language action policies
instruction grounding
generalization diagnosis
multi-object picking
task decomposition
David Emukpere
Naver Labs Europe, 6 Chem. de Maupertuis, Meylan, France
Romain Deffayet
Naver Labs Europe
Reinforcement Learning · Recommender Systems · Unbiased Learning to Rank
Jean-Michel Renders
Naver Labs Europe, 6 Chem. de Maupertuis, Meylan, France