DreamJourney: Perpetual View Generation with Video Diffusion Models

📅 2025-06-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenging problem of generating long-duration dynamic videos from a single image, conditioned on arbitrary camera trajectories: a task hindered by the weak 3D awareness of existing methods, which leads to geometric distortions and inaccurate motion modeling. We propose a two-stage framework: (1) a geometry-aware initialization stage that reconstructs a 3D point cloud and leverages a video diffusion model to generate temporally coherent, geometrically consistent video priors; and (2) a refinement stage incorporating cross-view consistency optimization, early-stopping, and view-filling strategies to enhance temporal stability, while integrating a multimodal large language model (MLLM) to parse and drive plausible object-level dynamics. To our knowledge, this is the first approach to employ video diffusion models for persistent, 3D-aware scene evolution. Extensive experiments demonstrate significant improvements over state-of-the-art methods in visual coherence, 3D geometric fidelity, and motion plausibility.

๐Ÿ“ Abstract
Perpetual view generation aims to synthesize a long-term video corresponding to an arbitrary camera trajectory solely from a single input image. Recent methods commonly utilize a pre-trained text-to-image diffusion model to synthesize new content for previously unseen regions along the camera movement. However, the underlying 2D diffusion model lacks 3D awareness and produces distorted artifacts. Moreover, these methods are limited to generating views of static 3D scenes, neglecting the object movements of the dynamic 4D world. To alleviate these issues, we present DreamJourney, a two-stage framework that leverages the world simulation capacity of video diffusion models to trigger a new perpetual scene view generation task with both camera movements and object dynamics. Specifically, in stage I, DreamJourney first lifts the input image to a 3D point cloud and renders a sequence of partial images from a specific camera trajectory. A video diffusion model is then utilized as a generative prior to complete the missing regions and enhance visual coherence across the sequence, producing a cross-view consistent video that adheres to the 3D scene and camera trajectory. Meanwhile, we introduce two simple yet effective strategies (early stopping and view padding) to further stabilize the generation process and improve visual quality. Next, in stage II, DreamJourney leverages a multimodal large language model to produce a text prompt describing object movements in the current view, and uses a video diffusion model to animate the current view with those object movements. Stages I and II are repeated recurrently, enabling perpetual dynamic scene view generation. Extensive experiments demonstrate the superiority of our DreamJourney over state-of-the-art methods both quantitatively and qualitatively. Our project page: https://dream-journey.vercel.app.
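The recurrent two-stage loop described in the abstract can be sketched in pseudocode. This is a minimal illustration only: the function names, signatures, and frame counts below are hypothetical stand-ins for the heavy components (point-cloud lifting, video diffusion, the MLLM), not the authors' actual implementation.

```python
# Hypothetical sketch of DreamJourney's recurrent two-stage loop.
# All helpers are illustrative placeholders; frames are modeled as strings.

def lift_to_point_cloud(image):
    # Stage I, step 1 (placeholder): lift the current image to a 3D point cloud.
    return {"source": image}

def render_partial_views(point_cloud, trajectory):
    # Stage I, step 2 (placeholder): render partial images along the camera
    # trajectory; unseen regions would be left as holes to be completed.
    return [f"partial_view@{pose}" for pose in trajectory]

def diffuse_video(views, early_stop=True, view_padding=True):
    # Stage I, step 3 (placeholder): a video diffusion prior completes the
    # missing regions; early stopping and view padding stabilize generation.
    return [v.replace("partial", "completed") for v in views]

def describe_dynamics(view):
    # Stage II, step 1 (placeholder): an MLLM writes a motion prompt.
    return f"plausible object motion in {view}"

def animate(view, prompt, n_frames=4):
    # Stage II, step 2 (placeholder): video diffusion animates the view
    # following the MLLM's prompt.
    return [f"{view}|frame{i}" for i in range(n_frames)]

def dream_journey(image, trajectories):
    """Alternate stage I (camera movement) and stage II (object dynamics)."""
    frames, current = [], image
    for trajectory in trajectories:
        cloud = lift_to_point_cloud(current)
        views = diffuse_video(render_partial_views(cloud, trajectory))
        frames.extend(views)
        prompt = describe_dynamics(views[-1])
        dynamic = animate(views[-1], prompt)
        frames.extend(dynamic)
        current = dynamic[-1]  # last animated frame seeds the next cycle
    return frames

frames = dream_journey("input.png", [["pose0", "pose1"], ["pose2", "pose3"]])
print(len(frames))  # 2 cycles x (2 completed views + 4 animated frames) = 12
```

The key structural point is the recurrence: each cycle's final animated frame becomes the input image for the next cycle's point-cloud lifting, which is what makes the generation "perpetual."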
Problem

Research questions and friction points this paper is trying to address.

Generates long-term videos from single images with camera movement
Addresses 3D awareness and distortion in view synthesis
Captures dynamic object movements in 4D scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages video diffusion for dynamic scene generation
Uses 3D point cloud for cross-view consistency
Integrates large language model for object dynamics
Authors

Bo Pan
State Key Lab of CAD&CG, Zhejiang University

Yang Chen
HiDream.ai

Yingwei Pan
HiDream.ai
Computer Vision · Vision and Language · Video Analytics

Ting Yao
HiDream.ai

Wei Chen
State Key Lab of CAD&CG, Zhejiang University; Laboratory of Art and Archaeology Image

Tao Mei
HiDream.ai; Fellow of CAE/IEEE/IAPR/CAAI
Multimedia Analysis · Computer Vision · Generative AI · Artificial Intelligence