VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses core challenges in extending large multimodal models (LMMs) to 3D scene understanding from monocular video input, namely implicit spatial modeling, weak temporal reasoning, and a lack of instruction alignment. We propose the first instruction-aligned vision-language model enhanced with 3D reconstruction capabilities: (1) a geometry encoder generates implicit 3D tokens; (2) a Spatial-Visual-View Fusion mechanism enables cross-modal spatiotemporal representation learning; and (3) a novel 3D reconstructive instruction-tuning paradigm, trained on over 200K curated instruction-response pairs. We further introduce the first comprehensive benchmark for vision-spatial-temporal intelligence, comprising over 138.6K QA pairs. Evaluated across five spatiotemporal relational reasoning tasks, our model achieves state-of-the-art accuracy and generalization, significantly advancing monocular 3D scene understanding and embodied reasoning without depth sensors.
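The pipeline summarized above (a geometry encoder producing implicit 3D tokens per frame, fused with per-frame visual tokens and view information before the language model) can be sketched as follows. All function names, token counts, and the additive-then-concatenate fusion rule are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64          # shared token embedding width (assumed)
N_FRAMES = 4    # monocular video frames
N_PATCH = 16    # visual patch tokens per frame (assumed)
N_GEOM = 8      # implicit 3D tokens per frame from the geometry encoder (assumed)

def visual_encoder(frames):
    """Stand-in 2D encoder: per-frame patch tokens."""
    return rng.normal(size=(len(frames), N_PATCH, D))

def geometry_encoder(frames):
    """Stand-in geometry encoder: implicit 3D tokens per frame."""
    return rng.normal(size=(len(frames), N_GEOM, D))

def view_embedding(n_frames):
    """Per-frame camera-view embedding, broadcast over that frame's tokens."""
    return rng.normal(size=(n_frames, 1, D))

def spatial_visual_view_fusion(vis, geom, view):
    """Hypothetical fusion: add the view embedding to both token streams,
    then concatenate visual and 3D tokens along the token axis per frame."""
    return np.concatenate([vis + view, geom + view], axis=1)

frames = [None] * N_FRAMES  # placeholder raw frames
fused = spatial_visual_view_fusion(
    visual_encoder(frames), geometry_encoder(frames), view_embedding(N_FRAMES)
)
print(fused.shape)  # (4, 24, 64): 16 visual + 8 geometry tokens per frame
```

The fused token sequence would then be fed to the LLM alongside the language instruction; the key design point is that spatial context enters as tokens in the model's own embedding space rather than as an external depth map.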

📝 Abstract
The rapid advancement of Large Multimodal Models (LMMs) for 2D images and videos has motivated extending these models to understand 3D scenes, aiming for human-like visual-spatial intelligence. Nevertheless, achieving deep spatial understanding comparable to human capabilities poses significant challenges in model encoding and data acquisition. Existing methods frequently depend on external depth sensors for geometry capture or utilize off-the-shelf algorithms for pre-constructing 3D maps, thereby limiting their scalability, especially with prevalent monocular video inputs and for time-sensitive applications. In this work, we introduce VLM-3R, a unified framework for Vision-Language Models (VLMs) that incorporates 3D Reconstructive instruction tuning. VLM-3R processes monocular video frames by employing a geometry encoder to derive implicit 3D tokens that represent spatial understanding. Leveraging our Spatial-Visual-View Fusion and over 200K curated 3D reconstructive instruction tuning question-answer (QA) pairs, VLM-3R effectively aligns real-world spatial context with language instructions. This enables monocular 3D spatial assistance and embodied reasoning. To facilitate the evaluation of temporal reasoning, we introduce the Vision-Spatial-Temporal Intelligence benchmark, featuring over 138.6K QA pairs across five distinct tasks focused on evolving spatial relationships. Extensive experiments demonstrate that our model, VLM-3R, not only facilitates robust visual-spatial reasoning but also enables the understanding of temporal 3D context changes, excelling in both accuracy and scalability.
Problem

Research questions and friction points this paper is trying to address.

Extending 2D vision-language models to understand 3D scenes
Overcoming limitations in 3D data acquisition and model encoding
Enabling monocular 3D spatial assistance and embodied reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometry encoder for implicit 3D tokens
Spatial-Visual-View Fusion technique
Over 200K curated 3D reconstructive instruction-tuning QA pairs
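One of the curated instruction-tuning examples might be structured as below. The field names, sample question, and chat-format conversion are invented for illustration; the paper does not specify this schema here:

```python
# Hypothetical schema for one of the ~200K reconstructive
# instruction-tuning QA pairs; all field names are assumptions.
qa_pair = {
    "video": "scene_0001.mp4",    # monocular video clip (hypothetical path)
    "question": "Which object is closer to the camera in the final frame, "
                "the chair or the table?",
    "answer": "The chair.",
    "task": "spatial_relation",   # one of several spatiotemporal task types
}

def to_chat_format(pair):
    """Convert a QA pair into a chat-style instruction example,
    with the video referenced by a placeholder token."""
    return [
        {"role": "user",
         "content": f"<video:{pair['video']}> {pair['question']}"},
        {"role": "assistant", "content": pair["answer"]},
    ]

messages = to_chat_format(qa_pair)
print(messages[1]["content"])  # The chair.
```

Pairs like this let the model learn to ground spatial and temporal relations from reconstruction-derived supervision, without any depth sensor in the loop.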