🤖 AI Summary
To address the data scalability bottleneck in 3D vision-language (3D-VL) models caused by inefficient scene representation, this paper proposes a general-purpose 3D-VL modeling framework tailored for realistic indoor scenes. Methodologically, it introduces: (1) a Condensed Feature Grid (CFG) representation that significantly improves point cloud encoding efficiency; (2) SceneDPO, a novel post-training objective designed for 3D scene understanding; and (3) a large-scale, high-quality, cross-domain 3D-VL dataset comprising 700K samples, spanning four scene categories and five task types. The framework integrates CFG-based encoding, multi-task joint training, scene-aware instruction fine-tuning, and SceneDPO-based preference optimization. Evaluated on mainstream 3D visual question answering benchmarks, including SQA3D, MSQA, and Beacon3D, the model achieves state-of-the-art performance while reducing token consumption and enhancing robustness and generalization across diverse 3D scenarios.
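The summary states that CFG condenses point cloud encodings to cut token overhead, but gives no mechanics. A minimal sketch of one plausible realization, assuming per-point features (e.g. lifted from 2D perception) are average-pooled into a coarse voxel grid so the language model sees one token per occupied cell rather than one per point; the function name and grid size are illustrative, not the paper's actual design:

```python
import torch

def condense_to_feature_grid(points, feats, grid_size=16):
    """Hypothetical sketch: pool per-point features into a coarse voxel
    grid, yielding at most grid_size**3 scene tokens instead of N.

    points: (N, 3) xyz coordinates; feats: (N, C) per-point features.
    Returns (M, C) grid tokens, one per occupied cell (M <= grid_size**3).
    """
    # Normalize coordinates into [0, 1) and quantize to cell indices.
    mins = points.min(0).values
    maxs = points.max(0).values
    cells = ((points - mins) / (maxs - mins + 1e-6) * grid_size).long()
    cells = cells.clamp(max=grid_size - 1)
    flat = cells[:, 0] * grid_size**2 + cells[:, 1] * grid_size + cells[:, 2]

    # Average-pool features of all points falling into the same cell.
    uniq, inv = flat.unique(return_inverse=True)
    grid = torch.zeros(len(uniq), feats.shape[1])
    grid.index_add_(0, inv, feats)
    counts = torch.zeros(len(uniq)).index_add_(0, inv, torch.ones(len(feats)))
    return grid / counts.unsqueeze(1)
```

For a typical indoor scan with tens of thousands of points, a 16³ grid bounds the token count at 4,096, which is the kind of reduction that makes large-scale multi-task training tractable.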
📝 Abstract
Developing 3D-VL generalists capable of understanding 3D scenes and following natural language instructions to perform a wide range of tasks has been a long-standing goal in the 3D-VL community. Despite recent progress, 3D-VL models still lag behind their 2D counterparts in capability and robustness, falling short of the generalist standard. A key obstacle to developing 3D-VL generalists lies in data scalability, hindered by the lack of an efficient scene representation. We propose LEO-VL, a 3D-VL model built upon condensed feature grid (CFG), an efficient scene representation that bridges 2D perception and 3D spatial structure while significantly reducing token overhead. This efficiency unlocks large-scale training towards a 3D-VL generalist, for which we curate over 700k high-quality 3D-VL samples spanning four domains of real-world indoor scenes and five tasks such as captioning and dialogue. LEO-VL achieves state-of-the-art performance on a variety of 3D QA benchmarks, including SQA3D, MSQA, and Beacon3D. Ablation studies confirm the efficiency of our representation, the importance of task and scene diversity, and the validity of our data curation principle. Furthermore, we introduce SceneDPO, a novel post-training objective that enhances the robustness of 3D-VL models. We hope our findings contribute to the advancement of scalable and robust 3D-VL generalists.
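The abstract names SceneDPO as a preference-optimization post-training objective but does not spell out its form. As a point of reference, a sketch of the standard DPO loss (Rafailov et al.) that such an objective would plausibly build on, here framed for scene QA as preferring a faithful answer over a hallucinated one for the same scene-question pair; the adaptation details are an assumption, not the paper's actual formulation:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective, sketched as a plausible base for SceneDPO.

    Each argument is the answer's token log-probability summed over the
    response, under the trainable policy (logp_*) or a frozen reference
    model (ref_*). beta scales the implicit KL penalty.
    """
    # Reward margin: how much more the policy (vs. the reference)
    # prefers the chosen answer over the rejected one.
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # Maximize the log-sigmoid of the margin, i.e. minimize its negation.
    return -F.logsigmoid(margin).mean()
```

In a scene-understanding setting, the chosen/rejected pairs could contrast grounded answers with answers that misstate object identity or spatial relations, pushing the model away from scene-level hallucinations.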