Pixel-to-4D: Camera-Controlled Image-to-Video Generation with Dynamic 3D Gaussians

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing image-to-video generation methods suffer from limitations in camera trajectory control, temporal consistency, and geometric completeness. This work proposes an end-to-end framework based on dynamic 3D Gaussian splatting which, for the first time, employs dynamic 3D Gaussian representations for single-image-driven video synthesis. The method jointly models camera motion and object dynamics within a single forward pass. By leveraging an explicit 3D scene representation, a motion sampling mechanism conditioned on a single input image, and differentiable rendering guided by prescribed camera trajectories, it achieves efficient, controllable, and temporally coherent video generation. Experiments on KITTI, Waymo, RealEstate10K, and DL3DV-10K demonstrate that the proposed approach significantly outperforms existing methods in both video quality and inference efficiency.

๐Ÿ“ Abstract
Humans excel at forecasting the future dynamics of a scene given just a single image. Video generation models that can mimic this ability are an essential component for intelligent systems. Recent approaches have improved temporal coherence and 3D consistency in single-image-conditioned video generation. However, these methods often lack robust user controllability, such as modifying the camera path, limiting their applicability in real-world applications. Most existing camera-controlled image-to-video models struggle with accurately modeling camera motion, maintaining temporal consistency, and preserving geometric integrity. Leveraging explicit intermediate 3D representations offers a promising solution by enabling coherent video generation aligned with a given camera trajectory. While such methods often use 3D point clouds to render scenes and introduce object motion in a later stage, this two-step process allows precise control over camera movement yet still falls short of full temporal consistency. We propose a novel framework that constructs a 3D Gaussian scene representation and samples plausible object motion, given a single image in a single forward pass. This enables fast, camera-guided video generation without the need for iterative denoising to inject object motion into rendered frames. Extensive experiments on the KITTI, Waymo, RealEstate10K, and DL3DV-10K datasets demonstrate that our method achieves state-of-the-art video quality and inference efficiency. The project page is available at https://melonienimasha.github.io/Pixel-to-4D-Website.
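To make the core idea concrete, the following is a minimal, heavily simplified sketch of why an explicit 3D Gaussian scene representation enables camera-controlled video generation: once Gaussians and a per-Gaussian motion field exist, each frame is produced by advecting the Gaussians and rendering them from the next pose on a prescribed camera path. This is an illustration only, not the paper's implementation: the `look_at` helper, the linear motion field, and the reduction of splatting to point projection are all assumptions introduced here (real Gaussian splatting rasterizes anisotropic 2D Gaussians with depth-sorted alpha compositing).

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    # Hypothetical helper (not from the paper): build a world-to-camera
    # extrinsic for one pose along a prescribed camera trajectory.
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(up, fwd)
    right = right / np.linalg.norm(right)
    true_up = np.cross(fwd, right)
    R = np.stack([right, true_up, fwd])   # rows: camera axes in world coords
    t = -R @ eye
    return R, t

def render_frame(means, opacities, R, t, f=500.0, cx=320.0, cy=240.0):
    # Splatting reduced to pinhole projection of Gaussian centers,
    # purely for illustration.
    cam = (R @ means.T).T + t             # world -> camera frame
    in_front = cam[:, 2] > 1e-6           # keep points in front of the camera
    cam = cam[in_front]
    u = f * cam[:, 0] / cam[:, 2] + cx
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1), opacities[in_front]

# Toy "scene": Gaussian centers plus a simple linear motion field, standing
# in for the Gaussians and object motion sampled from the input image.
means0 = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 6.0], [-1.0, -0.5, 7.0]])
velocity = np.array([[0.05, 0.0, 0.0]] * 3)   # assumed per-Gaussian motion
opacities = np.array([1.0, 0.8, 0.6])

frames = []
for k in range(8):                            # an 8-frame clip
    eye = np.array([0.1 * k, 0.0, 0.0])       # prescribed camera path
    R, t = look_at(eye, np.array([0.0, 0.0, 6.0]))
    means_k = means0 + k * velocity           # advect Gaussians (object motion)
    uv, alpha = render_frame(means_k, opacities, R, t)
    frames.append(uv)
```

The point of the sketch is the single-pass structure: no iterative denoising is needed per frame, because object motion lives in the explicit representation and each frame is just one differentiable render from the next camera pose.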
Problem

Research questions and friction points this paper is trying to address.

camera-controlled video generation
temporal consistency
3D consistency
geometric integrity
single-image-conditioned video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian representation
camera-controlled video generation
single-image-to-video
temporal consistency
object motion modeling