GAP-MLLM: Geometry-Aligned Pre-training for Activating 3D Spatial Perception in Multimodal Large Language Models

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models struggle to activate 3D spatial awareness when relying solely on RGB inputs, which limits their performance on 3D tasks. To address this, this work introduces geometric perception as a pretraining objective and proposes a geometry-aligned pretraining paradigm. Guided by visual prompts, the model jointly predicts sparse point maps and semantic labels, while a multi-level progressive fusion module with token-level gating adaptively integrates geometric and semantic information. Evaluated on 3D visual grounding, 3D dense captioning, and 3D video object detection, the method achieves significant performance gains, demonstrating its effectiveness in enhancing spatial perception and fusing geometric features within multimodal architectures.

📝 Abstract
Multimodal Large Language Models (MLLMs) demonstrate exceptional semantic reasoning but struggle with 3D spatial perception when restricted to pure RGB inputs. Despite leveraging implicit geometric priors from 3D reconstruction models, image-based methods still exhibit a notable performance gap compared to methods using explicit 3D data. We argue that this gap does not arise from insufficient geometric priors, but from a misalignment in the training paradigm: text-dominated fine-tuning fails to activate geometric representations within MLLMs. Existing approaches typically resort to naive feature concatenation and optimize directly for downstream tasks without geometry-specific supervision, leading to suboptimal structural utilization. To address this limitation, we propose GAP-MLLM, a Geometry-Aligned Pre-training paradigm that explicitly activates structural perception before downstream adaptation. Specifically, we introduce a visual-prompted joint task that compels the MLLM to predict sparse pointmaps alongside semantic labels, thereby enforcing geometric awareness. Furthermore, we design a multi-level progressive fusion module with a token-level gating mechanism, enabling adaptive integration of geometric priors without suppressing semantic reasoning. Extensive experiments demonstrate that GAP-MLLM significantly enhances geometric feature fusion and consistently improves performance across 3D visual grounding, 3D dense captioning, and 3D video object detection tasks.
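The abstract describes a token-level gating mechanism that adaptively blends geometric priors with semantic features. The paper does not publish the module's exact form, but the idea can be illustrated with a minimal numpy sketch: a learned gate produces one scalar per token, and each fused token is a convex combination of its semantic and geometric counterparts. All names (`token_gated_fusion`, `W_g`, `b_g`) and shapes here are hypothetical, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def token_gated_fusion(sem_tokens, geo_tokens, W_g, b_g):
    """Blend semantic and geometric token streams with a per-token gate.

    sem_tokens, geo_tokens: (T, D) arrays of aligned token features.
    W_g: (2D, 1) gate projection, b_g: scalar bias (hypothetical names).
    """
    # The gate sees both streams and emits one scalar in (0, 1) per token.
    gate_in = np.concatenate([sem_tokens, geo_tokens], axis=-1)  # (T, 2D)
    g = sigmoid(gate_in @ W_g + b_g)                             # (T, 1)
    # Convex combination: g -> 1 favors geometry, g -> 0 keeps semantics,
    # so geometric priors are injected without overwriting semantics.
    return g * geo_tokens + (1.0 - g) * sem_tokens

rng = np.random.default_rng(0)
T, D = 4, 8
sem = rng.standard_normal((T, D))
geo = rng.standard_normal((T, D))
W_g = rng.standard_normal((2 * D, 1)) * 0.1
fused = token_gated_fusion(sem, geo, W_g, 0.0)
print(fused.shape)  # (4, 8)
```

Because the gate is per-token rather than global, tokens whose queries hinge on geometry (e.g. grounding a referred object) can draw heavily on the geometric stream while purely linguistic tokens remain dominated by semantics.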
Problem

Research questions and friction points this paper is trying to address.

3D spatial perception
Multimodal Large Language Models
geometry-aligned pre-training
geometric representation
RGB-based 3D understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometry-Aligned Pre-training
3D Spatial Perception
Multimodal Large Language Models
Visual-Prompted Joint Task
Progressive Fusion