LLMI3D: MLLM-based 3D Perception from a Single 2D Image

πŸ“… 2024-08-14
πŸ“ˆ Citations: 3
✨ Influential: 0
πŸ€– AI Summary
Existing small-scale 3D-aware models suffer from poor generalization, while multimodal large language models (MLLMs) exhibit limited capability in open-scene single-image 3D perception due to weak local 3D spatial understanding, inaccurate geometric numerical output, and poor adaptability to varying camera focal lengths. To address these limitations, we propose the first MLLM adaptation framework for general-purpose 3D perception. Our method enhances fine-grained geometric perception via spatially augmented local feature extraction, introduces 3D query tokens to guide geometric decoding, and designs a geometric-projection-based 3D reasoning mechanism that explicitly models camera parameters and spatial relationships. We employ parameter-efficient fine-tuning and train on IG3Dβ€”a newly constructed fine-grained image-geometry-text QA datasetβ€”to enable joint vision-language-geometry modeling. Our approach achieves state-of-the-art performance across multiple 3D benchmarks, notably outperforming prior methods in cross-focal-length, open-vocabulary object, and complex-layout scenarios.

πŸ“ Abstract
Recent advances in autonomous driving, augmented reality, robotics, and embodied intelligence have created a pressing need for 3D perception algorithms. However, current 3D perception methods, especially specialized small models, generalize poorly in open scenarios. Multimodal large language models (MLLMs), on the other hand, excel in general capability but underperform on 3D tasks due to weak local 3D spatial object perception, poor text-based geometric numerical output, and an inability to handle variations in camera focal length. To address these challenges, we propose three solutions: Spatial-Enhanced Local Feature Mining for better spatial feature extraction, 3D Query Token-Derived Info Decoding for precise geometric regression, and Geometry Projection-Based 3D Reasoning for handling camera focal length variations. We apply parameter-efficient fine-tuning to a pre-trained MLLM to develop LLMI3D, a powerful 3D perception MLLM. We also construct the IG3D dataset, which provides fine-grained descriptions and question-answer annotations. Extensive experiments demonstrate that LLMI3D achieves state-of-the-art performance, outperforming other methods by a large margin.
Problem

Research questions and friction points this paper is trying to address.

Enhance 3D perception from single 2D images
Improve MLLMs' 3D spatial and geometric understanding
Handle camera focal length variations effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatial-Enhanced Local Feature Mining
3D Query Token-Derived Info Decoding
Geometry Projection-Based 3D Reasoning
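The third contribution, Geometry Projection-Based 3D Reasoning, rests on the standard pinhole camera model, which is why focal length must be modeled explicitly: the same 2D pixel maps to different 3D points under different intrinsics. A minimal sketch of that back-projection step (the function name and signature are illustrative, not taken from the paper):

```python
def back_project(u, v, depth, fx, fy, cx, cy):
    """Map a 2D pixel (u, v) with predicted metric depth to 3D camera
    coordinates under the pinhole model with intrinsics (fx, fy, cx, cy)."""
    # Shift the pixel to the principal point, then scale by depth / focal length.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```

For example, a box center at pixel (420, 240) with predicted depth 2 m, focal length 500 px, and principal point (320, 240) lands 0.4 m to the right of the optical axis; double the focal length and the same pixel maps to a point only 0.2 m off-axis, which illustrates the focal-length sensitivity the paper's reasoning mechanism accounts for.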
Fan Yang
School of Software, Tsinghua University, Beijing, China; BNRist, Tsinghua University, Beijing, China
Sicheng Zhao
Tsinghua University
Affective Computing Β· Multimedia Β· Domain Adaptation Β· Computer Vision
Yanhao Zhang
OPPO AI Center, Shenzhen, China
Haoxiang Chen
OPPO AI Center, Shenzhen, China
Hui Chen
BNRist, Tsinghua University, Beijing, China
Wenbo Tang
NavInfo, Beijing, China
Haonan Lu
OPPO AI Center, Shenzhen, China
Pengfei Xu
NavInfo, Beijing, China
Zhenyu Yang
OPPO AI Center, Shenzhen, China
Jungong Han
Chair Professor in Computer Vision, University of Sheffield, UK, FIAPR, FAAIA
Computer Vision Β· Video Analytics Β· Machine Learning
Guiguang Ding
Tsinghua University
Computer Vision Β· Multimedia Retrieval