From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language-action (VLA) models predominantly rely on 2D visual encoders, exhibiting limited 3D spatial reasoning capability—hindering their generalization and robustness in real-world embodied settings. To address this, we propose FALCON: a novel paradigm that injects geometry-prior-driven 3D spatial tokens directly into the action head via a lightweight Embodied Spatial Model. By optionally incorporating depth or pose information, FALCON enhances spatial representation without fine-tuning the backbone, while preserving language–multimodal alignment. Our method leverages a spatial foundation model to extract geometric priors from RGB inputs alone, decoupling spatial enhancement from vision-language understanding. Evaluated on three simulation benchmarks and eleven real-world robotic tasks, FALCON achieves state-of-the-art performance. It demonstrates strong robustness to occlusion, variations in spatial prompting, and object-scale discrepancies.

📝 Abstract
Existing vision-language-action (VLA) models act in the 3D real world but are typically built on 2D encoders, leaving a spatial reasoning gap that limits generalization and adaptability. Recent 3D integration techniques for VLAs either require specialized sensors and transfer poorly across modalities, or inject weak cues that lack geometry and degrade vision-language alignment. In this work, we introduce FALCON (From Spatial to Action), a novel paradigm that injects rich 3D spatial tokens into the action head. FALCON leverages spatial foundation models to deliver strong geometric priors from RGB alone, and includes an Embodied Spatial Model that can optionally fuse depth or pose for higher fidelity when available, without retraining or architectural changes. To preserve language reasoning, spatial tokens are consumed by a Spatial-Enhanced Action Head rather than being concatenated into the vision-language backbone. These designs enable FALCON to address limitations in spatial representation, modality transferability, and alignment. In comprehensive evaluations across three simulation benchmarks and eleven real-world tasks, FALCON achieves state-of-the-art performance, consistently surpasses competitive baselines, and remains robust under clutter, spatial-prompt conditioning, and variations in object scale and height.
Problem

Research questions and friction points this paper is trying to address.

Addressing spatial reasoning gap in vision-language-action models
Improving 3D representation without specialized sensors or modality transfer issues
Preserving vision-language alignment while enhancing geometric understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Injects 3D spatial tokens into action head
Uses spatial foundation models from RGB alone
Spatial tokens processed separately to preserve language alignment
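The separation sketched in these bullets—spatial tokens consumed only inside the action head while the vision-language backbone stays frozen—can be illustrated with a minimal, hypothetical PyTorch sketch. The module names, dimensions, and placeholder encoders below are assumptions for illustration, not FALCON's actual implementation:

```python
import torch
import torch.nn as nn

class SpatialEnhancedActionHead(nn.Module):
    """Hypothetical sketch: spatial tokens are fused only inside the action
    head, so the vision-language backbone's alignment is left untouched."""
    def __init__(self, dim=64, n_actions=7):
        super().__init__()
        # Vision-language tokens attend to the 3D spatial tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.policy = nn.Linear(dim, n_actions)

    def forward(self, vl_tokens, spatial_tokens):
        fused, _ = self.cross_attn(vl_tokens, spatial_tokens, spatial_tokens)
        return self.policy(fused.mean(dim=1))  # one action vector per sample

# Placeholders standing in for the real models (assumed shapes).
backbone = nn.Linear(32, 64)          # stand-in for the frozen VLM encoder
for p in backbone.parameters():
    p.requires_grad = False           # backbone is not fine-tuned
spatial_encoder = nn.Linear(32, 64)   # stand-in for the spatial foundation model

rgb_feats = torch.randn(2, 16, 32)    # (batch, tokens, features) from RGB
vl_tokens = backbone(rgb_feats)
spatial_tokens = spatial_encoder(rgb_feats)  # geometric priors from RGB alone

head = SpatialEnhancedActionHead()
actions = head(vl_tokens, spatial_tokens)
print(actions.shape)  # torch.Size([2, 7])
```

The design choice this illustrates: because the backbone never sees the spatial tokens, its pretrained language–vision alignment is preserved, and the spatial encoder can be swapped or augmented (e.g. with depth or pose) without retraining the backbone.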