AI Summary
This work addresses the challenging problem of generating long-duration dynamic videos from a single image, conditioned on arbitrary camera trajectories, a task hindered by weak 3D awareness in existing methods, leading to geometric distortions and inaccurate motion modeling. We propose a two-stage framework: (1) a geometry-aware initialization stage that reconstructs a 3D point cloud and leverages a video diffusion model to generate temporally coherent, geometrically consistent video priors; and (2) a refinement stage incorporating cross-view consistency optimization, early-stopping, and view-filling strategies to enhance temporal stability, while integrating a multimodal large language model (MLLM) to parse and drive plausible object-level dynamics. To our knowledge, this is the first approach to employ video diffusion models for persistent, 3D-aware scene evolution. Extensive experiments demonstrate significant improvements over state-of-the-art methods in visual coherence, 3D geometric fidelity, and motion plausibility.
Abstract
Perpetual view generation aims to synthesize a long-term video corresponding to an arbitrary camera trajectory solely from a single input image. Recent methods commonly utilize a pre-trained text-to-image diffusion model to synthesize new content for previously unseen regions as the camera moves. However, the underlying 2D diffusion model lacks 3D awareness, resulting in distorted artifacts. Moreover, these methods are limited to generating views of static 3D scenes, neglecting to capture object movements within the dynamic 4D world. To alleviate these issues, we present DreamJourney, a two-stage framework that leverages the world simulation capacity of video diffusion models to address a new perpetual scene view generation task with both camera movements and object dynamics. Specifically, in stage I, DreamJourney first lifts the input image to a 3D point cloud and renders a sequence of partial images along a specific camera trajectory. A video diffusion model is then utilized as a generative prior to complete the missing regions and enhance visual coherence across the sequence, producing a cross-view consistent video that adheres to the 3D scene and camera trajectory. Meanwhile, we introduce two simple yet effective strategies (early stopping and view padding) to further stabilize the generation process and improve visual quality. Next, in stage II, DreamJourney leverages a multimodal large language model to produce a text prompt describing object movements in the current view, and uses the video diffusion model to animate the current view with these movements. Stages I and II are repeated recurrently, enabling perpetual dynamic scene view generation. Extensive experiments demonstrate the superiority of our DreamJourney over state-of-the-art methods both quantitatively and qualitatively. Our project page: https://dream-journey.vercel.app.
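The recurrent two-stage pipeline described above can be sketched as a simple loop. This is a minimal, hedged illustration only: every function body below is a hypothetical placeholder (the real system uses 3D point-cloud rendering, a video diffusion model, and an MLLM, none of which are reproduced here); only the control flow mirrors the abstract.

```python
# Hypothetical sketch of DreamJourney's alternating stage I / stage II loop.
# All functions are stand-ins, not the paper's actual implementation.

def lift_to_point_cloud(image):
    # Stage I: reconstruct a 3D point cloud from the current image (placeholder).
    return {"points": image}

def render_partial_views(point_cloud, trajectory):
    # Stage I: render partial images along the camera trajectory;
    # regions unseen from the input remain as holes (placeholder).
    return [f"partial_view_{t}" for t in trajectory]

def complete_with_video_diffusion(partial_views):
    # Stage I: a video diffusion prior fills missing regions, stabilized by
    # early stopping and view padding per the abstract (placeholder).
    return [v.replace("partial", "completed") for v in partial_views]

def animate_with_object_dynamics(frame):
    # Stage II: an MLLM writes a motion prompt for the current view and the
    # video diffusion model animates it (placeholder: two fake frames).
    return [f"{frame}_animated_{k}" for k in range(2)]

def dream_journey(image, trajectories):
    # Stages I and II alternate recurrently; the last animated frame of each
    # iteration seeds the next, enabling perpetual generation.
    video, current = [], image
    for traj in trajectories:
        views = complete_with_video_diffusion(
            render_partial_views(lift_to_point_cloud(current), traj))
        video.extend(views)                       # camera-movement frames
        dynamic = animate_with_object_dynamics(views[-1])
        video.extend(dynamic)                     # object-dynamics frames
        current = dynamic[-1]
    return video

frames = dream_journey("input.png", [[0, 1], [2, 3]])
```

Each iteration contributes the diffusion-completed trajectory views followed by the animated frames, so two 2-step trajectories yield 8 placeholder frames in `frames`.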