🤖 AI Summary
Existing large multimodal models (LMMs) for 3D scene understanding rely on text-only supervision and therefore lack explicit geometric modeling capability. To address this, we propose a geometry-aware, reconstruction-based instruction tuning framework built on a novel "dual-supervision" paradigm: 3D point cloud geometry serves both as input and as an explicit learning target, enabling joint optimization of object-level and frame-level reconstruction tasks. The method employs a dual-encoder architecture that fuses point cloud features with multimodal semantics, augmented by geometric consistency constraints and a multi-task learning mechanism to induce robust spatial representation learning. Extensive experiments show significant gains across four major 3D vision-language benchmarks (ScanQA, Scan2Cap, ScanRefer, and SQA3D), validating the framework's effectiveness and generalizability for 3D spatial reasoning.
📝 Abstract
The rapid development of Large Multimodal Models (LMMs) has led to remarkable progress in 2D visual understanding; however, extending these capabilities to 3D scene understanding remains a significant challenge. Existing approaches predominantly rely on text-only supervision, which fails to provide the geometric constraints required for learning robust 3D spatial representations. In this paper, we introduce Reg3D, a novel Reconstructive Geometry Instruction Tuning framework that addresses this limitation by incorporating geometry-aware supervision directly into the training process. Our key insight is that effective 3D understanding necessitates reconstructing underlying geometric structures rather than merely describing them. Unlike existing methods that inject 3D information solely at the input level, Reg3D adopts a dual-supervision paradigm that leverages 3D geometric information both as input and as explicit learning targets. Specifically, we design complementary object-level and frame-level reconstruction tasks within a dual-encoder architecture, enforcing geometric consistency to encourage the development of spatial reasoning capabilities. Extensive experiments on ScanQA, Scan2Cap, ScanRefer, and SQA3D demonstrate that Reg3D delivers substantial performance improvements, establishing a new training paradigm for spatially aware multimodal models.
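The dual-supervision idea described above can be sketched as a multi-task objective: the usual text loss is combined with object-level and frame-level geometry reconstruction terms. The abstract does not specify the reconstruction loss, so this sketch assumes a symmetric Chamfer distance between predicted and target point sets; the function names (`chamfer_distance`, `reg3d_style_loss`) and the weights `w_obj`/`w_frame` are illustrative, not the paper's actual API.

```python
import numpy as np

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between point sets pred (N,3) and target (M,3).

    Assumed here as the reconstruction objective; the paper may use a
    different geometric loss.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d = np.sum((pred[:, None, :] - target[None, :, :]) ** 2, axis=-1)
    # Nearest-neighbor terms in both directions, averaged.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def reg3d_style_loss(l_text, obj_pairs, frame_pairs, w_obj=1.0, w_frame=1.0):
    """Combine the text loss with object- and frame-level reconstruction terms.

    obj_pairs / frame_pairs: lists of (predicted, target) point arrays.
    Weights are hypothetical hyperparameters for the multi-task mixture.
    """
    l_obj = np.mean([chamfer_distance(p, t) for p, t in obj_pairs]) if obj_pairs else 0.0
    l_frame = np.mean([chamfer_distance(p, t) for p, t in frame_pairs]) if frame_pairs else 0.0
    return l_text + w_obj * l_obj + w_frame * l_frame

# Minimal usage: a perfect reconstruction contributes zero geometric loss.
pts = np.random.default_rng(0).normal(size=(64, 3))
total = reg3d_style_loss(0.5, [(pts, pts)], [(pts, pts)])
```

Under this sketch, identical predicted and target point sets yield zero Chamfer distance, so the total loss reduces to the text loss alone; imperfect reconstructions add a geometric penalty that pushes the model to recover structure rather than merely describe it.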