🤖 AI Summary
Current Vision-Language-Action (VLA) models tightly couple perception and action within a single optimization pipeline, resulting in weak language grounding and poor robustness in real-world tabletop settings—particularly under cluttered backgrounds, target occlusion, and appearance overfitting. To address this, the authors propose OBEYED-VLA, a VLA framework with dual grounding: object-centric grounding via vision-language model (VLM)-guided selection of task-relevant object regions, and geometric grounding via multi-view 3D reconstruction with representations that prioritize 3D structure over appearance. This design explicitly decouples the perception and action modules. The perception module's grounded outputs are fed to a pre-trained VLA policy fine-tuned only on clean, single-object demonstrations. Experiments on a UR10e robotic platform demonstrate substantial robustness improvements across four challenging scenarios: distractor interference, target occlusion, background variation, and cluttered manipulation of unseen objects. Ablation studies confirm that both grounding mechanisms are indispensable.
📝 Abstract
Recent Vision-Language-Action (VLA) models have made impressive progress toward general-purpose robotic manipulation by post-training large Vision-Language Models (VLMs) for action prediction. Yet most VLAs entangle perception and control in a monolithic pipeline optimized purely for action, which can erode language-conditioned grounding. In our real-world tabletop tests, such policies attempt grasps even when the commanded target is absent, are distracted by clutter, and overfit to background appearance.
To address these issues, we propose OBEYED-VLA (OBject-centric and gEometrY groundED VLA), a framework that explicitly disentangles perceptual grounding from action reasoning. Instead of operating directly on raw RGB, OBEYED-VLA augments VLAs with a perception module that grounds multi-view inputs into task-conditioned, object-centric, and geometry-aware observations. This module includes a VLM-based object-centric grounding stage that selects task-relevant object regions across camera views, along with a complementary geometric grounding stage that emphasizes the 3D structure of these objects over their appearance. The resulting grounded views are then fed to a pretrained VLA policy, which we fine-tune exclusively on single-object demonstrations collected without environmental clutter or non-target objects.
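The decoupled perception-then-action flow described above can be sketched in a few lines. This is purely an illustrative mock-up, not the paper's implementation: the function names, array shapes, and the stand-in logic (a fixed center crop in place of VLM region selection, a grayscale channel in place of multi-view 3D reconstruction, a dummy bounded action in place of the fine-tuned policy) are all assumptions made for clarity.

```python
# Hypothetical sketch of OBEYED-VLA's decoupled perception -> action pipeline.
# All names, shapes, and internals below are illustrative assumptions.
import numpy as np

def ground_object_regions(views, instruction):
    """Stand-in for the VLM-based object-centric grounding stage, which
    selects task-relevant object regions in each camera view.
    Here a fixed center crop fakes the VLM's region proposal."""
    crops = []
    for img in views:
        h, w, _ = img.shape
        crops.append(img[h // 4: 3 * h // 4, w // 4: 3 * w // 4])
    return crops

def geometric_grounding(crops):
    """Stand-in for the geometry-aware stage, which emphasizes 3D structure
    over appearance (the real system uses multi-view 3D reconstruction).
    Here we merely collapse RGB to a single 'structure' channel."""
    return [c.mean(axis=-1, keepdims=True) for c in crops]

def vla_policy(grounded_views, instruction):
    """Stand-in for the fine-tuned VLA policy: maps grounded observations
    plus the language instruction to a 7-DoF action (pose + gripper)."""
    feat = np.array([v.mean() for v in grounded_views])
    return np.tanh(np.resize(feat, 7))  # dummy bounded action vector

# Two synthetic camera views in place of the real multi-view RGB input.
views = [np.random.rand(64, 64, 3) for _ in range(2)]
crops = ground_object_regions(views, "pick up the red mug")
grounded = geometric_grounding(crops)
action = vla_policy(grounded, "pick up the red mug")
print(action.shape)  # (7,)
```

The point of the sketch is the interface: the policy never sees raw RGB, only observations already grounded by the perception module, which is what lets the policy be fine-tuned on clean single-object demonstrations yet remain robust to clutter at test time.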
On a real-world UR10e tabletop setup, OBEYED-VLA substantially improves robustness over strong VLA baselines across four challenging regimes and multiple difficulty levels: distractor objects, absent-target rejection, background appearance changes, and cluttered manipulation of unseen objects. Ablation studies confirm that both semantic grounding and geometry-aware grounding are critical to these gains. Overall, the results indicate that making perception an explicit, object-centric component is an effective way to strengthen and generalize VLA-based robotic manipulation.