LEO-VL: Towards 3D Vision-Language Generalists via Data Scaling with Efficient Representation

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the data-scalability bottleneck in 3D vision-language (3D-VL) models caused by inefficient scene representation, this paper proposes a general-purpose 3D-VL modeling framework tailored to realistic indoor scenes. Methodologically, it introduces: (1) a condensed feature grid (CFG) representation that significantly improves point cloud encoding efficiency; (2) SceneDPO, the first post-training objective designed specifically for 3D scene understanding; and (3) a large-scale, high-quality, cross-domain 3D-VL dataset of over 700K samples spanning four scene domains and five task types. The framework combines CFG-based encoding, multi-task joint training, scene-aware instruction fine-tuning, and SceneDPO-based preference optimization. Evaluated on mainstream 3D visual question answering benchmarks, including SQA3D, MSQA, and Beacon3D, the model achieves state-of-the-art performance while reducing token consumption and improving robustness and generalization across diverse 3D scenarios.

📝 Abstract
Developing 3D-VL generalists capable of understanding 3D scenes and following natural language instructions to perform a wide range of tasks has been a long-standing goal in the 3D-VL community. Despite recent progress, 3D-VL models still lag behind their 2D counterparts in capability and robustness, falling short of the generalist standard. A key obstacle to developing 3D-VL generalists lies in data scalability, hindered by the lack of an efficient scene representation. We propose LEO-VL, a 3D-VL model built upon the condensed feature grid (CFG), an efficient scene representation that bridges 2D perception and 3D spatial structure while significantly reducing token overhead. This efficiency unlocks large-scale training towards a 3D-VL generalist, for which we curate over 700k high-quality 3D-VL samples spanning four domains of real-world indoor scenes and five tasks such as captioning and dialogue. LEO-VL achieves state-of-the-art performance on a variety of 3D QA benchmarks, including SQA3D, MSQA, and Beacon3D. Ablation studies confirm the efficiency of our representation, the importance of task and scene diversity, and the validity of our data curation principle. Furthermore, we introduce SceneDPO, a novel post-training objective that enhances the robustness of 3D-VL models. We hope our findings contribute to the advancement of scalable and robust 3D-VL generalists.
Problem

Research questions and friction points this paper is trying to address.

Building 3D-VL generalists that handle diverse tasks
Overcoming the data-scalability bottleneck via an efficient scene representation
Improving the robustness and performance of 3D-VL models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Condensed feature grid for efficient 3D representation
Curation of a large-scale 700k-sample 3D-VL dataset
SceneDPO post-training for enhanced robustness
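The page does not spell out the SceneDPO objective. As orientation only, here is a minimal sketch of the standard DPO preference loss that SceneDPO presumably adapts to scene-grounded responses; the function name and the pairing of preferred/dispreferred answers over 3D scenes are assumptions, not the paper's actual formulation.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (preferred y_w, dispreferred y_l) pair.

    logp_w / logp_l: summed log-probabilities of the preferred and
    dispreferred responses under the policy being trained;
    ref_logp_w / ref_logp_l: the same under a frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers y_w
    # over y_l, relative to the reference model, scaled by beta.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)): shrinks as the policy's preference
    # for y_w over y_l grows beyond the reference's.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the margin is 0
# and the loss equals log 2.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

In a SceneDPO-style setting, one would presumably condition both models on the scene tokens (e.g. the CFG representation) plus the question, and score paired answers, but the exact conditioning and pair construction are not described on this page.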
👥 Authors
Jiangyong Huang (Peking University)
Xiaojian Ma (University of California, Los Angeles)
Xiongkun Linghu (State Key Laboratory of General Artificial Intelligence, BIGAI)
Yue Fan (State Key Laboratory of General Artificial Intelligence, BIGAI)
Junchao He (State Key Laboratory of General Artificial Intelligence, BIGAI; Beijing University of Posts and Telecommunications)
Wenxin Tan (Tsinghua University)
Qing Li (State Key Laboratory of General Artificial Intelligence, BIGAI)
Song-Chun Zhu (Peking University; State Key Laboratory of General Artificial Intelligence, BIGAI; Tsinghua University)
Yixin Chen (State Key Laboratory of General Artificial Intelligence, BIGAI)
Baoxiong Jia (Ph.D. in Computer Science, UCLA)
Siyuan Huang (State Key Laboratory of General Artificial Intelligence, BIGAI)