Think with 3D: Geometric Imagination Grounded Spatial Reasoning from Limited Views

πŸ“… 2025-10-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Vision-language models (VLMs) struggle to reason about 3D spatial relationships under limited-view conditions due to their inherent 2D bias and lack of explicit 3D grounding. Method: This paper introduces 3DThinker, a framework that enables human-like implicit 3D mental simulation and spatial reasoning without requiring explicit 3D inputs, 3D annotations, or prior geometric knowledge. Its core innovation is latent-space alignment, which implicitly bridges VLM representations with those of 3D foundation models (e.g., VGGT), unifying 3D understanding and multimodal reasoning. It employs a two-stage training strategy: first, supervised alignment of 3D latent spaces; second, end-to-end optimization of the reasoning pathway. Contribution/Results: On multiple 3D spatial reasoning benchmarks, 3DThinker significantly outperforms strong baselines, demonstrating superior effectiveness, generalization to unseen configurations, and capacity for modeling geometric imagination.
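The stage-1 alignment described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's released implementation): hidden states the VLM emits for dedicated "3D latent" tokens are linearly projected into the feature space of a frozen 3D foundation model such as VGGT and regressed onto its features. The dimensions, the `LatentAligner` name, and the MSE-plus-cosine loss combination are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAligner(nn.Module):
    """Hypothetical stage-1 module: align VLM latents with frozen 3D features.

    The projection head and loss terms are illustrative assumptions;
    the paper only specifies supervised alignment of 3D latent spaces.
    """

    def __init__(self, vlm_dim: int, geo_dim: int):
        super().__init__()
        # Maps VLM hidden states into the 3D foundation model's feature space.
        self.proj = nn.Linear(vlm_dim, geo_dim)

    def forward(self, vlm_latents: torch.Tensor, geo_features: torch.Tensor) -> torch.Tensor:
        pred = self.proj(vlm_latents)
        # Match both magnitude (MSE) and direction (cosine) of the target features.
        mse = F.mse_loss(pred, geo_features)
        cos = 1.0 - F.cosine_similarity(pred, geo_features, dim=-1).mean()
        return mse + cos

aligner = LatentAligner(vlm_dim=32, geo_dim=16)
vlm_latents = torch.randn(2, 8, 32)   # [batch, num_latent_tokens, vlm_dim]
geo_features = torch.randn(2, 8, 16)  # stand-in for frozen VGGT features
loss = aligner(vlm_latents, geo_features)
```

In a real pipeline the 3D foundation model would stay frozen and supply `geo_features` from the same input views, so only the projection (and optionally the VLM) receives gradients.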

πŸ“ Abstract
Though recent advances in vision-language models (VLMs) have achieved remarkable progress across a wide range of multimodal tasks, understanding 3D spatial relationships from limited views remains a significant challenge. Previous reasoning methods typically rely on pure text (e.g., topological cognitive maps) or on 2D visual cues. However, their limited representational capacity hinders performance on tasks that require 3D spatial imagination. To address this limitation, we propose 3DThinker, a framework that effectively exploits the rich geometric information embedded within images while reasoning, as humans do. Our framework is the first to enable 3D mentaling during reasoning without any 3D prior input, and it does not rely on explicitly labeled 3D data for training. Specifically, our training consists of two stages. First, we perform supervised training to align the 3D latent generated by the VLM during reasoning with that of a 3D foundation model (e.g., VGGT). Then, we optimize the entire reasoning trajectory solely from outcome signals, thereby refining the underlying 3D mentaling. Extensive experiments across multiple benchmarks show that 3DThinker consistently outperforms strong baselines and offers a new perspective toward unifying 3D representations into multimodal reasoning. Our code will be available at https://github.com/zhangquanchen/3DThinker.
Problem

Research questions and friction points this paper is trying to address.

Enabling 3D spatial reasoning from limited 2D views
Overcoming limitations of 2D visual cues in spatial imagination
Developing 3D mental models without requiring 3D training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

3DThinker framework enables 3D reasoning without 3D input
Aligns 3D latent representations with foundation models
Optimizes reasoning trajectory using outcome signals only
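The last bullet, optimizing the trajectory from outcome signals alone, can be illustrated with a REINFORCE-style sketch: the whole generated reasoning trace receives a single terminal reward (1 if the final answer is correct, 0 otherwise), and the policy gradient pushes token log-probabilities accordingly. The specific RL algorithm, the `outcome_loss` helper, and the constant baseline are assumptions; the paper only states that the trajectory is optimized from outcome signals.

```python
import torch

def outcome_loss(token_logprobs: torch.Tensor, reward: float, baseline: float = 0.5) -> torch.Tensor:
    """Hypothetical outcome-only objective (REINFORCE with a constant baseline).

    token_logprobs: log-probabilities of every token in the reasoning trajectory.
    reward: terminal score for the trajectory (e.g., 1.0 if the answer is correct).
    """
    advantage = reward - baseline  # baseline reduces gradient variance
    # Negative sign: minimizing this loss increases the likelihood of
    # trajectories with positive advantage.
    return -(advantage * token_logprobs.sum())

# Toy trajectory of three generated tokens.
logprobs = torch.tensor([-0.1, -0.2, -0.3], requires_grad=True)
loss = outcome_loss(logprobs, reward=1.0)  # correct answer -> reward 1
loss.backward()  # gradients flow through the whole trajectory
```

Note that no per-step 3D supervision appears here: the only training signal is the scalar reward, which is what lets the internal 3D mentaling be refined end to end.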
πŸ”Ž Similar Papers
No similar papers found.