View-Consistent Diffusion Representations for 3D-Consistent Video Generation

📅 2025-11-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing video generation methods suffer from object deformation and visual artifacts under camera motion due to inconsistent 3D representations, severely limiting realism in simulation, gaming, and film production. To address this, we propose ViCoDRβ€”a video diffusion generation framework explicitly designed for 3D consistency. ViCoDR learns multi-view-aligned implicit diffusion representations by jointly incorporating camera pose priors and cross-view representation consistency constraints. Its core innovation lies in uncovering and modeling the strong correlation between representation consistency and generation quality within the diffusion latent space, enabling unified view-consistency optimization across three paradigms: image-to-video, text-to-video, and multi-view generation. Extensive experiments demonstrate that ViCoDR significantly suppresses structural distortions during viewpoint transitions and achieves state-of-the-art improvements in 3D consistency metrics across multiple benchmarks.

πŸ“ Abstract
Video generation models have made significant progress in generating realistic content, enabling applications in simulation, gaming, and filmmaking. However, current generated videos still contain visual artifacts arising from 3D inconsistencies, e.g., objects and structures deforming under changes in camera pose, which undermine user experience and simulation fidelity. Motivated by recent findings on representation alignment for diffusion models, we hypothesize that improving the multi-view consistency of video diffusion representations will yield more 3D-consistent video generation. Through detailed analysis of multiple recent camera-controlled video diffusion models, we reveal strong correlations between the 3D consistency of the learned representations and that of the generated videos. We then propose ViCoDR, a new approach that improves the 3D consistency of video models by learning multi-view consistent diffusion representations. We evaluate ViCoDR on camera-controlled image-to-video, text-to-video, and multi-view generation models, demonstrating significant improvements in the 3D consistency of the generated videos. Project page: https://danier97.github.io/ViCoDR.
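The abstract describes enforcing multi-view consistency on diffusion representations at corresponding locations across views. The paper's actual objective is not given here; purely as an illustration, a cross-view consistency term of this general kind could be sketched as below (the function name, inputs, and the cosine-distance choice are assumptions, not the authors' method; obtaining the correspondences from camera poses is also assumed to happen elsewhere):

```python
import numpy as np

def view_consistency_loss(feat_a, feat_b, matches):
    """Hypothetical cross-view representation consistency loss (sketch).

    feat_a, feat_b: (N, D) arrays of diffusion features sampled at
    pixel locations in two views of the same scene.
    matches: list of (i, j) index pairs linking rows of feat_a to rows
    of feat_b that are assumed to correspond under the known camera poses.
    Returns the mean (1 - cosine similarity) over matched feature pairs,
    so perfectly view-consistent features give a loss near 0.
    """
    total = 0.0
    for i, j in matches:
        a, b = feat_a[i], feat_b[j]
        cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
        total += 1.0 - cos
    return total / len(matches)

# Identical features across the two views -> loss is (numerically) near 0.
f = np.random.default_rng(0).normal(size=(4, 8))
print(view_consistency_loss(f, f, [(k, k) for k in range(4)]))
```

In a training loop, a term like this would be added to the usual diffusion denoising objective, pulling features of corresponding 3D points together across views; the actual weighting and feature-extraction layer are design choices the summary above does not specify.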
Problem

Research questions and friction points this paper is trying to address.

Eliminating 3D inconsistencies that cause object deformation in generated videos
Improving the multi-view consistency of video diffusion representations
Reducing visual artifacts caused by camera pose changes in generated videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning multi-view consistent diffusion representations using camera pose priors and cross-view consistency constraints
Modeling the correlation between representation consistency and generation quality in the diffusion latent space
Applying a unified view-consistency optimization across image-to-video, text-to-video, and multi-view generation