MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces a feed-forward framework for 4D dynamic novel view synthesis from monocular video that, for the first time, jointly models appearance, geometry, and motion, supporting novel view synthesis, 3D reconstruction, and point trajectory tracking in a single model. Methodologically, it represents the scene with a pixel-aligned grid of time-varying Gaussian primitives; scene flow is explicitly supervised, and dynamic geometry and appearance are jointly optimized. The same model also yields zero-shot scene flow estimation and moving-object segmentation without task-specific annotations. Across multiple benchmarks the method attains competitive performance while reconstructing a dynamic scene in about one second, orders of magnitude faster than existing approaches, enabling scalable training on large datasets and efficient deployment.
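
The core representation described above is a pixel-aligned, time-varying Gaussian grid. Below is a minimal PyTorch sketch of how such a grid could be predicted from per-pixel image features; the channel layout, head structure, feature dimension, and timestep count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the released code) of a pixel-aligned, time-varying Gaussian grid:
# every pixel of the input video predicts one Gaussian whose center can shift over time.
# Tensor shapes and the head layout are assumptions for illustration.
import torch
import torch.nn as nn

class PixelAlignedGaussianHead(nn.Module):
    def __init__(self, feat_dim: int = 128, num_timesteps: int = 8):
        super().__init__()
        self.num_timesteps = num_timesteps
        # Static Gaussian attributes per pixel: depth (1), rotation quaternion (4),
        # scale (3), opacity (1), RGB color (3) = 12 channels (hypothetical layout).
        self.static_head = nn.Conv2d(feat_dim, 12, kernel_size=1)
        # Time-varying part: a 3D offset of the Gaussian center for each timestep.
        self.motion_head = nn.Conv2d(feat_dim, 3 * num_timesteps, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, feat_dim, H, W) per-pixel features from a feed-forward backbone.
        B, _, H, W = feats.shape
        static = self.static_head(feats)               # (B, 12, H, W)
        motion = self.motion_head(feats)               # (B, 3*T, H, W)
        motion = motion.view(B, self.num_timesteps, 3, H, W)
        return static, motion

# Usage: per-pixel Gaussian attributes plus their per-timestep center offsets.
head = PixelAlignedGaussianHead()
feats = torch.randn(1, 128, 32, 32)
static, motion = head(feats)
print(static.shape, motion.shape)  # (1, 12, 32, 32) and (1, 8, 3, 32, 32)
```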

📝 Abstract
We present MoVieS, a novel feed-forward model that synthesizes 4D dynamic novel views from monocular videos in one second. MoVieS represents dynamic 3D scenes using pixel-aligned grids of Gaussian primitives, explicitly supervising their time-varying motion. This allows, for the first time, the unified modeling of appearance, geometry and motion, and enables view synthesis, reconstruction and 3D point tracking within a single learning-based framework. By bridging novel view synthesis with dynamic geometry reconstruction, MoVieS enables large-scale training on diverse datasets with minimal dependence on task-specific supervision. As a result, it also naturally supports a wide range of zero-shot applications, such as scene flow estimation and moving object segmentation. Extensive experiments validate the effectiveness and efficiency of MoVieS across multiple tasks, achieving competitive performance while offering several orders of magnitude speedups.
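
The zero-shot applications mentioned in the abstract follow naturally from the motion-aware representation: once each pixel has a Gaussian center per timestep, scene flow and a moving-object mask can be read off the predicted motion. The sketch below illustrates this under that assumption; the shapes, the simple magnitude threshold, and the value 0.05 are hypothetical, not taken from the paper.

```python
# Hedged sketch of the zero-shot by-products, assuming the model outputs per-pixel
# 3D Gaussian centers for each timestep with shape (T, H, W, 3).
import torch

def scene_flow(centers: torch.Tensor, t0: int, t1: int) -> torch.Tensor:
    """3D scene flow per pixel: displacement of each Gaussian center from t0 to t1."""
    return centers[t1] - centers[t0]                   # (H, W, 3)

def moving_object_mask(centers: torch.Tensor, t0: int, t1: int,
                       threshold: float = 0.05) -> torch.Tensor:
    """Binary mask of pixels whose Gaussians move farther than `threshold` (illustrative rule)."""
    flow = scene_flow(centers, t0, t1)
    return flow.norm(dim=-1) > threshold               # (H, W) boolean

# Usage with dummy predictions for an 8-frame, 32x32 grid: one patch drifts over time.
centers = torch.randn(8, 32, 32, 3) * 0.01
centers[:, 10:20, 10:20] += torch.linspace(0, 0.5, 8).view(8, 1, 1, 1)
mask = moving_object_mask(centers, t0=0, t1=7)
print(mask.float().mean())  # fraction of pixels flagged as moving
```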
Problem

Research questions and friction points this paper is trying to address.

How to synthesize 4D dynamic novel views from monocular video
How to unify appearance, geometry, and motion within a single model
How to support zero-shot applications such as scene flow estimation without task-specific annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feed-forward model built on pixel-aligned grids of time-varying Gaussian primitives
Unified modeling of appearance, geometry, and motion with explicit scene-flow supervision (see the loss sketch after this list)
Large-scale training on diverse datasets with minimal task-specific supervision
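
As referenced in the list above, the explicit supervision described in the summary suggests a joint objective over appearance, geometry, and motion. The following is a hedged sketch of such an objective, assuming L1 terms and placeholder weights; the actual losses, weights, and rendering pipeline are not specified here.

```python
# Hedged sketch (assumption, not the released training code) of a joint objective:
# photometric loss on rendered views, depth loss for geometry, and an explicit
# scene-flow loss on predicted Gaussian motion. Weights are placeholders.
import torch
import torch.nn.functional as F

def joint_loss(pred_rgb, gt_rgb, pred_depth, gt_depth, pred_flow, gt_flow,
               w_rgb: float = 1.0, w_depth: float = 0.5, w_flow: float = 0.5):
    """Combine appearance, geometry, and motion supervision into one scalar loss."""
    l_rgb = F.l1_loss(pred_rgb, gt_rgb)        # appearance: rendered vs. target frames
    l_depth = F.l1_loss(pred_depth, gt_depth)  # geometry: predicted vs. reference depth
    l_flow = F.l1_loss(pred_flow, gt_flow)     # motion: explicit scene-flow supervision
    return w_rgb * l_rgb + w_depth * l_depth + w_flow * l_flow
```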