ViDAR: Video Diffusion-Aware 4D Reconstruction From Monocular Inputs

📅 2025-06-23
🤖 AI Summary
Dynamic novel view synthesis aims to synthesize photorealistic images of moving subjects from monocular video, but the task is hampered by the ill-posed disentanglement of structure and motion and by sparse supervision. This paper introduces ViDAR, which first uses a personalized diffusion model to generate scene-adaptive pseudo-multi-view supervision, alleviating the geometric ambiguity inherent in monocular input, and then jointly optimizes a 4D Gaussian splatting representation and camera poses under a diffusion-aware loss to improve spatio-temporal consistency and detail fidelity in dynamic regions. Evaluated on benchmarks including DyCheck, ViDAR significantly outperforms prior methods, particularly under large viewpoint variation and complex motion, achieving superior visual quality and geometric consistency.

📝 Abstract
Dynamic Novel View Synthesis aims to generate photorealistic views of moving subjects from arbitrary viewpoints. This task is particularly challenging when relying on monocular video, where disentangling structure from motion is ill-posed and supervision is scarce. We introduce Video Diffusion-Aware Reconstruction (ViDAR), a novel 4D reconstruction framework that leverages personalised diffusion models to synthesise a pseudo multi-view supervision signal for training a Gaussian splatting representation. By conditioning on scene-specific features, ViDAR recovers fine-grained appearance details while mitigating artefacts introduced by monocular ambiguity. To address the spatio-temporal inconsistency of diffusion-based supervision, we propose a diffusion-aware loss function and a camera pose optimisation strategy that aligns synthetic views with the underlying scene geometry. Experiments on DyCheck, a challenging benchmark with extreme viewpoint variation, show that ViDAR outperforms all state-of-the-art baselines in visual quality and geometric consistency. We further highlight ViDAR's strong improvement over baselines on dynamic regions and provide a new benchmark to compare performance in reconstructing motion-rich parts of the scene. Project page: https://vidar-4d.github.io
Problem

Research questions and friction points this paper is trying to address.

Dynamic novel view synthesis from monocular video inputs
Disentangling structure from motion with scarce supervision
Mitigating artefacts from monocular ambiguity in 4D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses personalised diffusion models for pseudo multi-view supervision
Employs Gaussian splatting representation for 4D reconstruction
Introduces diffusion-aware loss and pose optimisation
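The paper does not spell out the form of its diffusion-aware loss here, but the core idea described above is to treat diffusion-generated pseudo-views as weaker supervision than real captured frames, since they are only approximately consistent with the scene. A minimal NumPy sketch of that weighting scheme (the function name, the per-view L1 error, and the single `pseudo_weight` scalar are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def diffusion_aware_loss(rendered, target, is_pseudo, pseudo_weight=0.5):
    """Illustrative sketch: weighted photometric loss over a batch of views.

    rendered, target : float arrays of shape (V, H, W, 3)
    is_pseudo        : bool array of shape (V,) flagging diffusion pseudo-views
    pseudo_weight    : down-weighting factor for pseudo-view supervision
                       (a hypothetical stand-in for the paper's diffusion-aware
                       weighting; real frames keep full weight 1.0)
    """
    # Per-view L1 photometric error, averaged over pixels and channels.
    err = np.abs(rendered - target).mean(axis=(1, 2, 3))
    # Real views get weight 1.0; pseudo-views are trusted less.
    w = np.where(is_pseudo, pseudo_weight, 1.0)
    return float((w * err).sum() / w.sum())

# Toy usage: one real view (error 1.0) and one pseudo-view (error 0.5).
rendered = np.zeros((2, 4, 4, 3))
target = np.stack([np.ones((4, 4, 3)), 0.5 * np.ones((4, 4, 3))])
loss = diffusion_aware_loss(rendered, target, np.array([False, True]))
```

In a full pipeline this loss would be backpropagated into the 4D Gaussian parameters and the camera poses of the synthetic views, so that pseudo-view supervision sharpens dynamic regions without letting its spatio-temporal inconsistencies dominate the geometry.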