GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) exhibit significant limitations in 3D spatial understanding, particularly in aligning global scene structure with local frame-level details, which hinders zero-shot reasoning over indoor 3D scenes. To address this, the paper proposes a purely visual, bird's-eye-view (BEV)-guided global-local alignment framework that eliminates reliance on point clouds or multi-view geometry. It introduces a novel visual prompting paradigm that feeds BEV images and video frames, marked with consistent object IDs, jointly to the VLM, exposing and rectifying the models' deficits in spatial correspondence. Fine-tuned on a processed video dataset with 165K text annotations, the model acquires intrinsic 3D understanding and retains its gains at inference even without explicit visual prompting. It achieves state-of-the-art performance across multiple 3D understanding benchmarks and surpasses GPT-4o in zero-shot settings. The approach serves as a non-invasive, plug-and-play training paradigm compatible with open-source VLMs.

📝 Abstract
In recent years, 2D Vision-Language Models (VLMs) have made significant strides in image-text understanding tasks. However, their performance in 3D spatial comprehension, which is critical for embodied intelligence, remains limited. Recent advances have leveraged 3D point clouds and multi-view images as inputs, yielding promising results. In contrast, we propose a purely vision-based solution inspired by human perception, which relies solely on visual cues for 3D spatial understanding. This paper empirically investigates the limitations of VLMs in 3D spatial knowledge, revealing that their primary shortcoming is the lack of global-local correspondence between the scene and individual frames. To address this, we introduce GPT4Scene, a novel visual prompting paradigm for VLM training and inference that builds this global-local relationship, significantly improving the 3D spatial understanding of indoor scenes. Specifically, GPT4Scene constructs a 3D Bird's Eye View (BEV) image from the video and marks consistent object IDs across both the frames and the BEV image. The model then takes as input the BEV image concatenated with the marked video frames. In zero-shot evaluations, GPT4Scene improves performance over closed-source VLMs such as GPT-4o. Additionally, we prepare a processed video dataset with 165K text annotations to fine-tune open-source VLMs, achieving state-of-the-art performance on all 3D understanding tasks. Surprisingly, after training with the GPT4Scene paradigm, VLMs consistently improve during inference even without visual prompts or a BEV image as explicit correspondence. This demonstrates that the proposed paradigm helps VLMs develop an intrinsic ability to understand 3D scenes, paving the way for a non-invasive approach to extending pre-trained VLMs to 3D scene understanding.
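The input scheme the abstract describes, a BEV image plus marked video frames fed jointly to the VLM with a consistent object-ID legend, can be sketched as below. This is a minimal illustration of the prompting paradigm, not the paper's released code; the function name, the message schema, and all field names are assumptions.

```python
def build_gpt4scene_prompt(bev_image, frames, object_ids, question):
    """Assemble a multimodal prompt in the GPT4Scene style (illustrative).

    bev_image  -- the bird's-eye-view image reconstructed from the video
    frames     -- ordered video frames, each overlaid with object-ID markers
    object_ids -- IDs marked consistently on the BEV image and every frame
    question   -- the 3D-understanding query posed to the VLM
    """
    # Legend listing the shared object-ID markers, e.g. "<1>, <3>".
    id_legend = ", ".join(f"<{i}>" for i in sorted(object_ids))

    # BEV image first (global view), then the marked frames (local views).
    content = [{"type": "image", "source": bev_image, "role": "bev"}]
    content += [{"type": "image", "source": f, "role": "frame"} for f in frames]

    # Text turn ties the global and local views together via the shared IDs.
    content.append({
        "type": "text",
        "text": (f"Objects are marked with consistent IDs {id_legend} "
                 f"in the BEV image and all frames. {question}"),
    })
    return content

prompt = build_gpt4scene_prompt(
    bev_image="scene_bev.png",
    frames=["frame_00.png", "frame_01.png"],
    object_ids={3, 1},
    question="Which object is closest to the door?",
)
```

The resulting list mirrors the concatenation order described in the abstract: one global BEV view, the per-frame local views, and a text query referencing the shared markers.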
Problem

Research questions and friction points this paper is trying to address.

3D Spatial Understanding
Visual Language Models
Indoor Scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT4Scene
3D scene understanding
Visual Language Models (VLMs)