SpatialBot: Precise Spatial Understanding with Vision Language Models

📅 2024-06-19
🏛️ arXiv.org
📈 Citations: 14
Influential: 4
📄 PDF
🤖 AI Summary
Current vision-language models (VLMs) excel at 2D image understanding but exhibit significant limitations in depth perception and 3D spatial reasoning, capabilities essential for embodied intelligence. To address this gap, the paper proposes SpatialBot, a framework that feeds VLMs paired RGB and depth images; SpatialQA, a multi-level depth-aware question-answering dataset explicitly designed for spatial understanding; and SpatialBench, a comprehensive evaluation benchmark spanning geometric reasoning, spatial relation modeling, and logical inference. Experiments demonstrate that SpatialBot achieves substantial gains over state-of-the-art VLMs on spatial understanding tasks, while also delivering consistent improvements on general multimodal benchmarks and embodied AI tasks. All code, models, and datasets are publicly released.

📝 Abstract
Vision Language Models (VLMs) have achieved impressive performance in 2D image understanding; however, they still struggle with spatial understanding, which is the foundation of Embodied AI. In this paper, we propose SpatialBot for better spatial understanding by feeding both RGB and depth images. Additionally, we have constructed the SpatialQA dataset, which involves multi-level depth-related questions to train VLMs for depth understanding. Finally, we present SpatialBench to comprehensively evaluate VLMs' capabilities in spatial understanding at different levels. Extensive experiments on our spatial-understanding benchmark, general VLM benchmarks and Embodied AI tasks demonstrate the remarkable improvements of SpatialBot trained on SpatialQA. The model, code and data are available at https://github.com/BAAI-DCAI/SpatialBot.
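The core idea of feeding both RGB and depth images can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`depth_to_rgb`, `build_vlm_inputs`) and the specific depth encoding (min-max normalization into a replicated 3-channel grayscale image) are illustrative assumptions, chosen so a depth map can pass through a standard RGB vision encoder.

```python
import numpy as np

def depth_to_rgb(depth: np.ndarray) -> np.ndarray:
    """Normalize a metric depth map (H, W) into a 3-channel uint8 image
    so it can be consumed by a standard RGB vision encoder.
    (Illustrative encoding; the paper may use a different scheme.)"""
    d_min, d_max = depth.min(), depth.max()
    norm = (depth - d_min) / max(d_max - d_min, 1e-6)  # scale to [0, 1]
    gray = (norm * 255).astype(np.uint8)
    return np.stack([gray] * 3, axis=-1)  # replicate into 3 channels

def build_vlm_inputs(rgb: np.ndarray, depth: np.ndarray, question: str) -> dict:
    """Pair the RGB frame with its encoded depth map as a two-image prompt."""
    return {"images": [rgb, depth_to_rgb(depth)], "text": question}

# Synthetic example: a 4x4 scene whose depth increases left to right.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.tile(np.linspace(0.5, 3.0, 4), (4, 1))  # depth in meters
inputs = build_vlm_inputs(rgb, depth, "Which object is closer to the camera?")
print(len(inputs["images"]), inputs["images"][1].shape)
```

The depth-related questions in SpatialQA would then be posed against such paired inputs, so the model learns to ground answers in the depth channel rather than in 2D appearance alone.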
Problem

Research questions and friction points this paper is trying to address.

VLMs perform well on 2D images but lack the spatial understanding that Embodied AI requires.
How to incorporate depth input alongside RGB so that VLMs can reason about 3D space.
Lack of datasets and benchmarks for training and evaluating multi-level depth understanding in VLMs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses RGB and depth images for spatial understanding
Introduces SpatialQA dataset for depth-related training
Develops SpatialBench for evaluating spatial understanding
Wenxiao Cai
Stanford University
Yaroslav Ponomarenko
Peking University
Jianhao Yuan
University of Oxford
Deep Learning · Robotics · Computer Vision
Xiaoqi Li
Peking University
Wankou Yang
Southeast University
Hao Dong
Peking University
Bo Zhao
Shanghai Jiao Tong University, BAAI