4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses 4D reconstruction of dynamic scenes from monocular video, tackling the challenges of jointly modeling static and dynamic components and of representing objects with variable lifespans. The authors propose a Transformer-based model grounded in a 4D Gaussian prior, where 4D Gaussians serve as a spatiotemporal inductive bias. The method combines rolling-window temporal modeling with density-adaptive control to enable efficient long-sequence processing and real-time rendering, replacing iterative optimization with pure feed-forward inference and reducing reconstruction time from hours to seconds. It requires only raw monocular video with corresponding camera poses and is trained end-to-end. Experiments demonstrate: (i) superior performance over state-of-the-art Gaussian-based methods on real-world videos; (ii) accuracy competitive with traditional optimization-based approaches on cross-domain videos; and (iii) robust scaling to 64 consecutive input frames.
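The rolling-window inference described above can be sketched as follows. This is a minimal illustration of windowed feed-forward processing only: the 64-frame window size comes from the summary, while `model`, the stride, and the frame/pose format are hypothetical assumptions, not the authors' actual API.

```python
# Hypothetical sketch: feed-forward 4D reconstruction over a long posed
# monocular video in overlapping rolling windows. `model` stands in for a
# trained network mapping (frames, poses) -> predicted 4D Gaussians; it is
# an assumption, not the paper's interface.

def rolling_window_reconstruct(frames, poses, model, window=64, stride=32):
    """Predict per-window 4D Gaussians over a long sequence."""
    assert len(frames) == len(poses)
    predictions = []
    start = 0
    while start < len(frames):
        end = min(start + window, len(frames))
        # One purely feed-forward pass per window: no per-scene optimization.
        predictions.append(model(frames[start:end], poses[start:end]))
        if end == len(frames):
            break
        start += stride
    # Overlapping windows would then be fused for temporal consistency.
    return predictions
```

With a 100-frame clip, `window=64`, and `stride=32`, this produces three overlapping windows (frames 0-63, 32-95, and 64-99), each reconstructed in a single forward pass.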

📝 Abstract
We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussians as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. We propose a novel density-control strategy for training, which enables 4DGT to handle longer space-time inputs while maintaining efficient rendering at runtime. Our model processes 64 consecutive posed frames in a rolling-window fashion, predicting consistent 4D Gaussians in the scene. Unlike optimization-based methods, 4DGT performs purely feed-forward inference, reducing reconstruction time from hours to seconds and scaling effectively to long video sequences. Trained only on large-scale monocular posed video datasets, 4DGT significantly outperforms prior Gaussian-based networks on real-world videos and achieves on-par accuracy with optimization-based methods on cross-domain videos. Project page: https://4dgt.github.io
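The "4D Gaussian as inductive bias" idea can be made concrete via standard Gaussian conditioning. The sketch below shows the generic construction used in 4D Gaussian representations (a spatiotemporal Gaussian sliced at a query time yields an effective 3D Gaussian); the exact parameterization in 4DGT may differ, so treat this as background rather than the paper's formulation.

```latex
% A 4D Gaussian has mean \mu = (\mu_{xyz}, \mu_t) \in \mathbb{R}^4 and a
% 4x4 covariance \Sigma with spatial block \Sigma_{xyz,xyz}, temporal
% variance \Sigma_{t,t}, and cross-covariance \Sigma_{xyz,t}.
% Conditioning on a query time t gives an effective 3D Gaussian:
\begin{align}
\mu_{xyz \mid t} &= \mu_{xyz} + \Sigma_{xyz,t}\,\Sigma_{t,t}^{-1}\,(t - \mu_t) \\
\Sigma_{xyz \mid t} &= \Sigma_{xyz,xyz} - \Sigma_{xyz,t}\,\Sigma_{t,t}^{-1}\,\Sigma_{t,xyz}
\end{align}
% A temporal weight \exp\!\bigl(-\tfrac{(t - \mu_t)^2}{2\,\Sigma_{t,t}}\bigr)
% scales each primitive's opacity, so a Gaussian is only "active" near
% \mu_t -- one way to model objects with variable lifespans.
```

Static geometry is then just the limiting case of a very large temporal variance, which is how a single primitive family can unify static and dynamic components.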
Problem

Research questions and friction points this paper is trying to address.

Reconstruct dynamic scenes from monocular videos efficiently
Unify static and dynamic components in 4D modeling
Achieve real-time rendering with feed-forward inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

4D Gaussian Transformer for dynamic scenes
Novel density control for efficient rendering
Feed-forward inference reduces reconstruction time