🤖 AI Summary
To address poor temporal consistency in zero-shot text-driven video editing, this paper proposes Video-3DGS, the first two-stage framework to integrate 3D Gaussian Splatting (3DGS) into video refinement. Methodologically, it introduces a dual-branch 3DGS model tailored to dynamic monocular videos, decoupling foreground from background, and integrates MC-COLMAP for sparse reconstruction. Crucially, it pioneers converting 3DGS reconstruction outputs into explicit spatiotemporal constraints for video diffusion models. Key innovations include mask-guided point-cloud initialization, a learnable 2D fusion map, and joint optimization. Evaluated on the DAVIS dataset, Video-3DGS achieves +3 dB and +7 dB PSNR gains in video reconstruction over NeRF-based and 3DGS-based baselines, respectively, trains 1.9× and 4.5× faster than those state-of-the-art methods, and markedly improves temporal consistency across 58 diverse dynamic editing cases.
📝 Abstract
Recent advancements in zero-shot video diffusion models have shown promise for text-driven video editing, but challenges remain in achieving high temporal consistency. To address this, we introduce Video-3DGS, a 3D Gaussian Splatting (3DGS)-based video refiner designed to enhance temporal consistency in zero-shot video editors. Our approach uses a two-stage 3D Gaussian optimization process tailored to editing dynamic monocular videos. In the first stage, Video-3DGS employs an improved version of COLMAP, referred to as MC-COLMAP, which processes original videos using a Masked and Clipped approach. For each video clip, MC-COLMAP generates point clouds for the dynamic foreground objects and the complex background. These point clouds initialize two sets of 3D Gaussians (Frg-3DGS and Bkg-3DGS) that represent the foreground and background views, respectively. The two views are then merged via a learnable 2D parameter map to reconstruct full views. In the second stage, we leverage the reconstruction ability developed in the first stage to impose temporal constraints on the video diffusion model. To demonstrate the efficacy of Video-3DGS at both stages, we conduct extensive experiments on two related tasks: video reconstruction and video editing. Trained for 3k iterations, Video-3DGS significantly improves video reconstruction quality (+3 dB and +7 dB PSNR) and training efficiency (1.9× and 4.5× faster) over NeRF-based and 3DGS-based state-of-the-art methods on the DAVIS dataset, respectively. Moreover, it enhances video editing by ensuring temporal consistency across 58 dynamic monocular videos.
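The abstract does not spell out how the learnable 2D parameter map combines the Frg-3DGS and Bkg-3DGS renders. A minimal sketch of one plausible formulation, assuming a per-pixel logit map squashed through a sigmoid to blend the two branch outputs (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def fuse_views(frg_render, bkg_render, fusion_logits):
    """Blend foreground and background renders with a per-pixel
    learnable 2D map; the sigmoid keeps blend weights in [0, 1].

    frg_render, bkg_render: (H, W, 3) rendered RGB images
    fusion_logits:          (H, W, 1) learnable parameters
    """
    m = 1.0 / (1.0 + np.exp(-fusion_logits))          # per-pixel weight
    return m * frg_render + (1.0 - m) * bkg_render    # convex blend

# Toy example: 4x4 RGB renders from each branch.
H, W = 4, 4
frg = np.ones((H, W, 3))       # stand-in for the Frg-3DGS render
bkg = np.zeros((H, W, 3))      # stand-in for the Bkg-3DGS render
logits = np.zeros((H, W, 1))   # zero-init -> weight 0.5 everywhere
full = fuse_views(frg, bkg, logits)
print(full[0, 0])              # -> [0.5 0.5 0.5]
```

In the actual method the map would be optimized jointly with the Gaussians against a reconstruction loss, so the weights learn to favor the foreground branch inside object masks and the background branch elsewhere.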