3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) exhibit fundamental limitations in 3D spatial understanding. To address this, we propose an unsupervised, architecture-agnostic geometric knowledge distillation framework that implicitly injects geometric priors—such as sparse correspondences, relative depth estimates, and dense cost volumes—extracted from 3D foundation models (e.g., MASt3R, VGGT) into the pretrained vision-language representation space of 2D VLMs. Our approach requires neither architectural modifications nor 3D ground-truth annotations. It jointly leverages contrastive distillation and cross-modal feature alignment to enhance 3D structural reasoning while preserving the VLM's original multimodal capabilities. Extensive evaluation on multiple 3D vision-language reasoning and perception benchmarks demonstrates substantial improvements: spatial reasoning accuracy increases significantly, and computational overhead is reduced by over 40%. To our knowledge, this is the first method to achieve lightweight, general-purpose, and efficient knowledge-transfer-based 3D enhancement for VLMs.

📝 Abstract
Vision-Language Models (VLMs) have shown remarkable performance on diverse visual and linguistic tasks, yet they remain fundamentally limited in their understanding of 3D spatial structures. We propose Geometric Distillation, a lightweight, annotation-free fine-tuning framework that injects human-inspired geometric cues into pretrained VLMs without modifying their architecture. By distilling (1) sparse correspondences, (2) relative depth relations, and (3) dense cost volumes from off-the-shelf 3D foundation models (e.g., MASt3R, VGGT), our method shapes representations to be geometry-aware while remaining compatible with natural image-text inputs. Through extensive evaluations on 3D vision-language reasoning and 3D perception benchmarks, our method consistently outperforms prior approaches, achieving improved 3D spatial reasoning with significantly lower computational cost. Our work demonstrates a scalable and efficient path to bridge 2D-trained VLMs with 3D understanding, opening up wider use in spatially grounded multimodal tasks.
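The contrastive distillation described in the abstract can be sketched as an InfoNCE-style objective: projected VLM features are pulled toward their paired features from a frozen 3D teacher (e.g., MASt3R) and pushed away from unpaired ones. This is a minimal stdlib-only sketch; the function names, the simple per-sample pairing, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def geometric_distillation_loss(student_feats, teacher_feats, temperature=0.07):
    """InfoNCE-style contrastive distillation (illustrative sketch).

    student_feats: projected 2D-VLM features (hypothetical; one vector per sample)
    teacher_feats: frozen 3D-foundation-model features, index-aligned with the student
    Each student feature should score highest against its own teacher feature
    and lower against every other teacher feature in the batch.
    """
    n = len(student_feats)
    total = 0.0
    for i in range(n):
        logits = [cosine(student_feats[i], t) / temperature for t in teacher_feats]
        # log-sum-exp with max-shift for numerical stability
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_denom)  # NLL of the positive (paired) teacher
    return total / n
```

When student and teacher features are aligned the loss approaches zero; shuffling the pairing drives it up, which is the signal that fine-tunes the VLM's representation toward the teacher's geometry.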
Problem

Research questions and friction points this paper is trying to address.

Enhancing 3D spatial understanding in Vision-Language Models
Injecting geometric cues without architectural changes
Improving 3D reasoning with lower computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight fine-tuning with geometric distillation
Injects 3D cues without architecture changes
Uses sparse, depth, and dense 3D features