🤖 AI Summary
This paper introduces DynaScene, the first framework to use explicit camera poses to drive dynamic background generation. It addresses three key challenges in compositing a portrait foreground video with a reference scene image: misalignment of background motion, unnatural generation of newly revealed regions, and temporally inconsistent textures. Methodologically, it encodes camera poses extracted from the original video as an explicit control signal and adopts a multi-task learning paradigm that jointly optimizes background outpainting and scene variation as auxiliary tasks, building on diffusion models for high-fidelity dynamic synthesis. Contributions include: (1) the first pose-explicit paradigm for dynamic scene generation; (2) a large-scale, high-quality benchmark dataset of 200K video clips, ten times larger than existing real-world human video datasets; and (3) state-of-the-art performance on real human video benchmarks, significantly outperforming static and interpolation-based baselines in visual quality, motion coherence, and generalization.
📝 Abstract
In this paper, we investigate the generation of new video backgrounds given a human foreground video, a camera pose, and a reference scene image. This task presents three key challenges. First, the generated background should precisely follow the camera movements corresponding to the human foreground. Second, as the camera shifts in different directions, newly revealed content should appear seamless and natural. Third, objects within the video frame should maintain consistent textures as the camera moves to ensure visual coherence. To address these challenges, we propose DynaScene, a new framework that uses camera poses extracted from the original video as an explicit control to drive background motion. Specifically, we design a multi-task learning paradigm that incorporates auxiliary tasks, namely background outpainting and scene variation, to enhance the realism of the generated backgrounds. Given the scarcity of suitable data, we construct a large-scale, high-quality dataset tailored for this task, comprising video foregrounds, reference scene images, and corresponding camera poses. The dataset contains 200K video clips, ten times more than existing real-world human video datasets, providing a significantly richer and more diverse training resource. Project page: https://yaomingshuai.github.io/Beyond-Static-Scenes.github.io/
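The multi-task paradigm described above can be sketched as a weighted sum of the main background-generation loss and the two auxiliary losses. The loss names and the weights `w_out` and `w_var` below are illustrative assumptions, not values from the paper:

```python
def multitask_loss(l_background, l_outpaint, l_variation,
                   w_out=0.5, w_var=0.5):
    """Hypothetical sketch of the multi-task objective.

    l_background: main loss for pose-driven background generation
    l_outpaint:   auxiliary loss for background outpainting
    l_variation:  auxiliary loss for scene variation
    w_out, w_var: assumed weights balancing the auxiliary tasks
    """
    return l_background + w_out * l_outpaint + w_var * l_variation
```

In practice each term would be a diffusion denoising loss computed on the corresponding task's target frames; the weights trade off realism of newly revealed regions against fidelity to the reference scene.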