4D LangSplat: 4D Language Gaussian Splatting via Multimodal Large Language Models

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of modeling time-sensitive, open-vocabulary language queries in dynamic 4D scenes. Existing CLIP-based approaches (e.g., LangSplat) are restricted to static 3D settings and fail to capture semantic evolution of objects over time. We propose the first video-oriented 4D language field framework, introducing two key innovations: (1) an MLLM-driven object-level video captioning module that generates temporally consistent, object-specific, pixel-aligned language supervision—bypassing conventional visual feature learning; and (2) a state-aware deformable network that ensures temporal coherence. Integrated with 4D Gaussian Splatting, our method jointly encodes spatiotemporal geometry and semantics. Evaluated on multiple benchmarks, it significantly outperforms CLIP-based baselines, enabling fine-grained, time-sensitive semantic localization and cross-frame semantic tracking. To our knowledge, this is the first framework achieving efficient and precise open-vocabulary 4D language querying.

📝 Abstract
Learning 4D language fields to enable time-sensitive, open-ended language queries in dynamic scenes is essential for many real-world applications. While LangSplat successfully grounds CLIP features into 3D Gaussian representations, achieving precision and efficiency in 3D static scenes, it lacks the ability to handle dynamic 4D fields as CLIP, designed for static image-text tasks, cannot capture temporal dynamics in videos. Real-world environments are inherently dynamic, with object semantics evolving over time. Building a precise 4D language field necessitates obtaining pixel-aligned, object-wise video features, which current vision models struggle to achieve. To address these challenges, we propose 4D LangSplat, which learns 4D language fields to handle time-agnostic or time-sensitive open-vocabulary queries in dynamic scenes efficiently. 4D LangSplat bypasses learning the language field from vision features and instead learns directly from text generated from object-wise video captions via Multimodal Large Language Models (MLLMs). Specifically, we propose a multimodal object-wise video prompting method, consisting of visual and text prompts that guide MLLMs to generate detailed, temporally consistent, high-quality captions for objects throughout a video. These captions are encoded using a Large Language Model into high-quality sentence embeddings, which then serve as pixel-aligned, object-specific feature supervision, facilitating open-vocabulary text queries through shared embedding spaces. Recognizing that objects in 4D scenes exhibit smooth transitions across states, we further propose a status deformable network to model these continuous changes over time effectively. Our results across multiple benchmarks demonstrate that 4D LangSplat attains precise and efficient results for both time-sensitive and time-agnostic open-vocabulary queries.
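The pipeline described in the abstract — per-object video captions encoded into sentence embeddings that act as pixel-aligned supervision, queried through a shared embedding space — can be illustrated with a minimal NumPy sketch. Everything here is hypothetical: `embed_text` is a stand-in for the LLM sentence encoder, and the captions and masks are toy data, not the paper's implementation.

```python
import numpy as np

def embed_text(text: str, dim: int = 8) -> np.ndarray:
    # Stand-in for an LLM sentence encoder: a deterministic (per-run)
    # pseudo-embedding, normalized so dot products are cosine similarities.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Hypothetical per-object captions and segmentation masks on a 4x4 frame.
captions = {1: "a red ball rolling", 2: "a dog sitting still"}
masks = {1: np.zeros((4, 4), bool), 2: np.zeros((4, 4), bool)}
masks[1][:2] = True   # object 1 occupies the top half of the frame
masks[2][2:] = True   # object 2 occupies the bottom half

# Splat each object's caption embedding onto its pixels,
# yielding pixel-aligned, object-specific feature supervision.
feat = np.zeros((4, 4, 8))
for oid, cap in captions.items():
    feat[masks[oid]] = embed_text(cap)

# Open-vocabulary query: cosine similarity in the shared embedding space.
q = embed_text("a red ball rolling")
sim = feat @ q                  # unit-norm features, so this is cosine similarity
print(sim[0, 0] > sim[3, 3])    # query matches object 1's pixels more strongly
```

The key point the sketch makes is that once captions and queries live in one embedding space, localization reduces to a per-pixel similarity map, with no vision-feature distillation in the loop.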
Problem

Research questions and friction points this paper is trying to address.

Handling time-sensitive, open-vocabulary language queries in dynamic scenes.
Learning 4D language fields when current vision models cannot provide pixel-aligned, object-wise video features.
Modeling continuous object state transitions in 4D scenes.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Multimodal Large Language Models to supervise 4D language fields.
Generates temporally consistent, object-wise video captions for dynamic scenes.
Introduces a status deformable network for smooth temporal state transitions.
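The status deformable network's core idea — an object's semantics at time t is a smooth, convex blend of a small set of per-state embeddings — can be sketched as below. This is a toy NumPy version under stated assumptions: the tiny time-conditioned MLP, its weights, and the `StatusBlender` name are all illustrative, not the paper's architecture.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

class StatusBlender:
    """Toy state-aware deformation: the semantic feature at time t is a
    convex combination of K per-state embeddings (all names hypothetical)."""
    def __init__(self, state_embeds: np.ndarray, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.states = state_embeds               # (K, D): one embedding per state
        k = state_embeds.shape[0]
        self.w1 = rng.standard_normal((1, 16))   # tiny MLP on scalar time
        self.w2 = rng.standard_normal((16, k))

    def __call__(self, t: float) -> np.ndarray:
        h = np.tanh(np.array([[t]]) @ self.w1)
        logits = (h @ self.w2).ravel()
        w = softmax(logits)                      # smooth, convex state weights
        return w @ self.states                   # blended feature at time t

states = np.eye(3)                               # 3 states, e.g. closed/opening/open
blend = StatusBlender(states)
f0, f1 = blend(0.0), blend(1.0)
print(f0.sum())   # convex weights sum to 1, so the blended feature does too
```

Because the weights come from a softmax over a continuous function of t, the blended feature varies smoothly with time, which is the temporal-coherence property the network is meant to enforce.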
Wanhua Li
Harvard University
Computer Vision, Pattern Recognition
Renping Zhou
Harvard University, Tsinghua University
Jiawei Zhou
Stony Brook University
Yingwei Song
Harvard University, Brown University
Johannes Herter
Harvard University, ETH Zürich
Minghan Qin
Bytedance Research | Tsinghua University
Computer Vision, 3D Vision, 3D Scene Perception
Gao Huang
Tsinghua University
Hanspeter Pfister
An Wang Professor of Computer Science, Harvard University
Visualization, Computer Graphics, Computer Vision