🤖 AI Summary
This work addresses the challenging problem of 4D dynamic reconstruction from uncalibrated monocular video of non-static scenes. We propose the first monocular 4D Gaussian flow method that requires no multi-view input, known camera parameters, or static-scene assumption. Our approach leverages 2D depth and optical-flow priors to guide Gaussian point initialization and pixel-level densification. It jointly optimizes video segmentation masks, camera poses, and Gaussian point dynamics in an alternating fashion, augmented by spatiotemporal consistency regularization. The method enables precise per-point tracking, motion-aware object segmentation, frame-wise camera localization, and high-fidelity novel-view synthesis. Crucially, it delivers a unified 4D representation, comprising time-varying geometry, appearance, and motion, that supports both scene-level and object-level editing. Experimental results demonstrate robust performance under unconstrained real-world conditions, establishing a new foundation for monocular dynamic scene understanding and manipulation.
📝 Abstract
Recovering the 4D world from monocular video is a crucial yet challenging task. Conventional methods usually rely on assumptions such as multi-view video, known camera parameters, or static scenes. In this paper, we relax all these constraints and tackle a highly ambitious but practical task: given only a single monocular video without camera parameters, we aim to recover the dynamic 3D world alongside the camera poses. To this end, we introduce GFlow, a new framework that uses only 2D priors (depth and optical flow) to lift a video into a 4D scene, represented as a flow of 3D Gaussians through space and time. GFlow first segments the video into still and moving parts, then alternates between optimizing the camera poses and the dynamics of the 3D Gaussian points. This process enforces consistency among adjacent points and smooth transitions between frames. Because dynamic scenes continually introduce new visual content, we present a prior-driven initialization and a pixel-wise densification strategy for Gaussian points to integrate that content. Combining these techniques, GFlow transcends the boundaries of 4D recovery from casual videos: it naturally enables tracking of points and segmentation of moving objects across frames. Additionally, GFlow estimates the camera pose for each frame, enabling novel view synthesis by changing the camera pose. This capability facilitates extensive scene-level or object-level editing, highlighting GFlow's versatility and effectiveness. Visit our project page at: https://littlepure2333.github.io/GFlow
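The alternating scheme described above, fix the scene and update the camera, then fix the camera and update the point dynamics under a consistency regularizer, can be illustrated with a deliberately simplified toy sketch. Everything below is a hypothetical stand-in, not GFlow's actual implementation: 2D points substitute for Gaussian centers, a single translation substitutes for a full camera pose, and a neighbor-averaging term substitutes for the spatiotemporal consistency regularization.

```python
import numpy as np

def alternating_optimization(obs, n_iters=20, lam=0.1):
    """Toy alternating optimization (hypothetical simplification of GFlow).

    obs: (N, 2) observed 2D positions for one frame.
    Alternates two closed-form least-squares steps:
      1) fix the points, fit the camera translation;
      2) fix the camera, pull points toward observations while blending in
         a crude neighbor-consistency term weighted by `lam`.
    """
    points = np.zeros_like(obs)   # stand-in for Gaussian centers
    pose = np.zeros(2)            # stand-in for the camera pose (translation only)
    for _ in range(n_iters):
        # Step 1: with points fixed, the best translation is the mean residual.
        pose = np.mean(obs - points, axis=0)
        # Step 2: with pose fixed, move points toward their observations,
        # regularized toward a neighboring point (consistency stand-in).
        target = obs - pose
        neighbor = np.roll(points, 1, axis=0)
        points = (target + lam * neighbor) / (1.0 + lam)
    residual = np.linalg.norm(obs - (points + pose))
    return points, pose, residual
```

In the real method the inner steps are gradient-based, the pose is a full SE(3) transform fitted against depth and optical-flow priors, and the regularizer couples points across both space and time; the sketch only shows why alternating the two subproblems drives the joint reprojection residual down.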