GLaD: Geometric Latent Distillation for Vision-Language-Action Models

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current Vision-Language-Action (VLA) models over-rely on RGB inputs and lack geometric priors essential for spatial reasoning and fine-grained manipulation. To address this, we propose a geometric implicit distillation framework that, for the first time, directly transfers 3D geometric knowledge into the visual token hidden states of large language models—not merely into the visual encoder. Our method operates solely on RGB data, requiring neither depth sensors nor 3D annotations. Key components include a geometry-aware visual Transformer (VGGT), cross-modal hidden-state alignment, and an RGB-only pretraining strategy. After pretraining on the Bridge dataset, our model achieves a 94.1% average success rate across the four LIBERO benchmark task suites—surpassing UniVLA (92.5%)—demonstrating substantial improvements in spatial reasoning capability and policy generalization.

📝 Abstract
Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, while ignoring geometric cues crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision transformer (VGGT), ensuring that geometric understanding is deeply integrated into the multimodal representations that drive action prediction. Pretrained on the Bridge dataset with this geometry distillation mechanism, GLaD achieves 94.1% average success rate across four LIBERO task suites, outperforming UniVLA (92.5%) which uses identical pretraining data. These results validate that geometry-aware pretraining enhances spatial reasoning and policy generalization without requiring explicit depth sensors or 3D annotations.
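The core mechanism described above — aligning the LLM's hidden states at visual-token positions with features from a frozen VGGT teacher — can be sketched as a simple cosine-distance distillation objective. This is a minimal illustration, not the paper's specification: the projection head, feature dimensions, and exact loss form are assumptions.

```python
import numpy as np

def geometric_distillation_loss(llm_hidden, vggt_feats, proj):
    """Hypothetical sketch of GLaD-style hidden-state distillation.

    llm_hidden : (N, d_llm) array of LLM hidden states at the N
                 visual-token positions (the "student").
    vggt_feats : (N, d_geo) array of frozen VGGT features for the
                 same image patches (the "teacher").
    proj       : (d_llm, d_geo) learned projection into the teacher's
                 feature space (assumed; the paper's head may differ).

    Returns the mean (1 - cosine similarity) between projected student
    states and teacher features; 0 means perfect alignment.
    """
    student = llm_hidden @ proj  # map student states into teacher space
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = vggt_feats / np.linalg.norm(vggt_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))
```

In training, this auxiliary loss would be added to the usual action-prediction objective during RGB-only pretraining, with the VGGT teacher kept frozen so gradients flow only into the LLM and the projection.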
Problem

Research questions and friction points this paper is trying to address.

Existing VLA models rely primarily on RGB inputs and lack the geometric priors needed for spatial reasoning and fine-grained manipulation
Prior distillation approaches inject geometric features only into the vision encoder, leaving the multimodal representations that drive action prediction geometry-blind
Depth sensors and 3D annotations are costly to collect at scale, motivating an RGB-only route to geometric understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills 3D geometric knowledge directly into the LLM's visual-token hidden states, not merely into the vision encoder
Aligns those hidden states with features from a frozen geometry-aware vision transformer (VGGT) during pretraining
Operates on RGB data alone (no depth sensors or 3D annotations) and reaches 94.1% average success on the four LIBERO suites, versus 92.5% for UniVLA with identical pretraining data