MLLMs Need 3D-Aware Representation Supervision for Scene Understanding

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current multimodal large language models (MLLMs) suffer from weak 3D representation capabilities due to the absence of 3D pretraining, limiting their performance in 3D reasoning for scene understanding. To address this, we propose 3DRS—a novel framework that (1) empirically establishes, for the first time, a strong positive correlation between MLLMs’ 3D representation quality and downstream task performance; and (2) introduces a 3D-annotation-free cross-modal feature alignment paradigm via knowledge distillation from pretrained 3D foundation models, enabling explicit alignment between visual encoders and 3D geometric semantics. Our method integrates 3D feature distillation, contrastive learning, and multi-view consistency modeling, and is compatible with mainstream MLLMs—including Qwen-VL, LLaVA, and InternVL—for fine-tuning. On benchmarks such as ScanNet and 3D-FRONT, 3DRS achieves consistent improvements across vision-language tasks—e.g., visual grounding, image captioning, and visual question answering—with average gains of 4.2%–7.8%.

Technology Category

Application Category

📝 Abstract
Recent advances in scene understanding have leveraged multimodal large language models (MLLMs) for 3D reasoning by capitalizing on their strong 2D pretraining. However, the lack of explicit 3D data during MLLM pretraining limits 3D representation capability. In this paper, we investigate the 3D-awareness of MLLMs by evaluating multi-view correspondence and reveal a strong positive correlation between the quality of 3D-aware representation and downstream task performance. Motivated by this, we propose 3DRS, a framework that enhances MLLM 3D representation learning by introducing supervision from pretrained 3D foundation models. Our approach aligns MLLM visual features with rich 3D knowledge distilled from 3D models, effectively improving scene understanding. Extensive experiments across multiple benchmarks and MLLMs -- including visual grounding, captioning, and question answering -- demonstrate consistent performance gains. Project page: https://visual-ai.github.io/3drs
Problem

Research questions and friction points this paper is trying to address.

MLLMs lack 3D data during pretraining limiting 3D representation
3D-aware representation quality correlates with downstream task performance
Proposing 3DRS to enhance MLLM 3D learning via 3D supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Supervises MLLMs with 3D foundation models
Aligns visual features with 3D knowledge
Improves scene understanding via 3D-aware representation
🔎 Similar Papers
No similar papers found.
Xiaohu Huang
Xiaohu Huang
The University of HongKong
computer visionvideo analysis
J
Jingjing Wu
Department of Computer Vision Technology (VIS), Baidu Inc.
Qunyi Xie
Qunyi Xie
Baidu VIS
OCR、MLLM
K
Kai Han
Visual AI Lab, The University of Hong Kong