๐ค AI Summary
Existing image-to-video generation methods suffer from limitations in camera trajectory control, temporal consistency, and geometric completeness. This work proposes an end-to-end framework based on dynamic 3D Gaussian splatting, whichโ for the first timeโemploys dynamic 3D Gaussian representations for single-image-driven video synthesis. The method jointly models camera motion and object dynamics within a single forward pass. By leveraging an explicit 3D scene representation, a motion sampling mechanism conditioned on a single input image, and differentiable rendering guided by prescribed camera trajectories, it achieves efficient, controllable, and temporally coherent video generation. Experiments on KITTI, Waymo, RealEstate10K, and DL3DV-10K demonstrate that the proposed approach significantly outperforms existing methods in both video quality and inference efficiency.
๐ Abstract
Humans excel at forecasting the future dynamics of a scene given just a single image. Video generation models that can mimic this ability are an essential component for intelligent systems. Recent approaches have improved temporal coherence and 3D consistency in single-image-conditioned video generation. However, these methods often lack robust user controllability, such as modifying the camera path, limiting their applicability in real-world applications. Most existing camera-controlled image-to-video models struggle with accurately modeling camera motion, maintaining temporal consistency, and preserving geometric integrity. Leveraging explicit intermediate 3D representations offers a promising solution by enabling coherent video generation aligned with a given camera trajectory. Although these methods often use 3D point clouds to render scenes and introduce object motion in a later stage, this two-step process still falls short in achieving full temporal consistency, despite allowing precise control over camera movement. We propose a novel framework that constructs a 3D Gaussian scene representation and samples plausible object motion, given a single image in a single forward pass. This enables fast, camera-guided video generation without the need for iterative denoising to inject object motion into render frames. Extensive experiments on the KITTI, Waymo, RealEstate10K and DL3DV-10K datasets demonstrate that our method achieves state-of-the-art video quality and inference efficiency. The project page is available at https://melonienimasha.github.io/Pixel-to-4D-Website.