Beyond Static Scenes: Camera-controllable Background Generation for Human Motion

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces DynaScene, the first explicit camera-pose-driven framework for dynamic background generation, addressing three key challenges in compositing human foreground videos with reference scene images: misalignment of background motion, unnatural generation of newly revealed regions, and temporal inconsistency of texture. Methodologically, it encodes camera poses as an explicit control signal and uses multi-task learning to jointly optimize background outpainting and scene variation as auxiliary tasks, with diffusion models providing high-fidelity dynamic synthesis. Contributions include: (1) the first pose-explicit paradigm for dynamic scene generation; (2) a large-scale, high-quality dataset of 200K video clips, ten times larger than existing real-world human video datasets; and (3) state-of-the-art performance on real human video benchmarks, significantly outperforming static and interpolation-based baselines in visual quality, motion coherence, and generalization.
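The summary above describes camera-pose conditioning plus multi-task supervision over background outpainting and scene variation. As a rough illustration only, here is a minimal, hypothetical PyTorch sketch of how such conditioning could be wired up; it is not the authors' code, and all module names, the pose dimension, and the loss weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseConditionedDenoiser(nn.Module):
    """Hypothetical sketch: a camera-pose-conditioned video denoiser with
    auxiliary heads for background outpainting and scene variation."""
    def __init__(self, latent_dim=320, pose_dim=12):
        super().__init__()
        # Encode each frame's camera pose (e.g. a flattened 3x4 extrinsic matrix)
        # into the same width as the denoiser features.
        self.pose_encoder = nn.Sequential(
            nn.Linear(pose_dim, latent_dim),
            nn.SiLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Stand-in for a video diffusion backbone (UNet/DiT); a real model would
        # also take a timestep and reference-scene-image conditioning.
        self.backbone = nn.Conv3d(latent_dim, latent_dim, kernel_size=3, padding=1)
        # One head per task: main background generation plus two auxiliary tasks.
        self.head_background = nn.Conv3d(latent_dim, latent_dim, kernel_size=1)
        self.head_outpaint = nn.Conv3d(latent_dim, latent_dim, kernel_size=1)
        self.head_variation = nn.Conv3d(latent_dim, latent_dim, kernel_size=1)

    def forward(self, noisy_latents, camera_poses):
        # noisy_latents: (B, C, T, H, W); camera_poses: (B, T, pose_dim)
        pose_emb = self.pose_encoder(camera_poses)              # (B, T, C)
        pose_emb = pose_emb.permute(0, 2, 1)[..., None, None]   # (B, C, T, 1, 1)
        feats = self.backbone(noisy_latents + pose_emb)         # pose-driven features
        return (self.head_background(feats),
                self.head_outpaint(feats),
                self.head_variation(feats))

def multitask_loss(preds, targets, weights=(1.0, 0.5, 0.5)):
    """Weighted sum of per-task denoising losses (weights are illustrative)."""
    return sum(w * F.mse_loss(p, t) for w, p, t in zip(weights, preds, targets))
```

A full implementation would likely inject the pose embedding into every backbone block (e.g. via cross-attention or adaptive normalization) rather than a single additive fusion, and would denoise latents of the background video conditioned on the reference scene image.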

📝 Abstract
In this paper, we investigate the generation of new video backgrounds given a human foreground video, a camera pose, and a reference scene image. This task presents three key challenges. First, the generated background should precisely follow the camera movements corresponding to the human foreground. Second, as the camera shifts in different directions, newly revealed content should appear seamless and natural. Third, objects within the video frame should maintain consistent textures as the camera moves to ensure visual coherence. To address these challenges, we propose DynaScene, a new framework that uses camera poses extracted from the original video as an explicit control to drive background motion. Specifically, we design a multi-task learning paradigm that incorporates auxiliary tasks, namely background outpainting and scene variation, to enhance the realism of the generated backgrounds. Given the scarcity of suitable data, we constructed a large-scale, high-quality dataset tailored for this task, comprising video foregrounds, reference scene images, and corresponding camera poses. This dataset contains 200K video clips, ten times larger than existing real-world human video datasets, providing a significantly richer and more diverse training resource. Project page: https://yaomingshuai.github.io/Beyond-Static-Scenes.github.io/
Problem

Research questions and friction points this paper is trying to address.

Generate dynamic backgrounds matching human motion and camera movement
Ensure newly revealed content appears seamless and natural as the camera shifts
Maintain texture consistency for visual coherence during camera motion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses camera poses extracted from the original video as explicit control for background motion
Multi-task learning with background outpainting and scene variation as auxiliary tasks enhances realism
Large-scale dataset of 200K video clips, ten times larger than existing real-world human video datasets
Mingshuai Yao
Harbin Institute of Technology, Harbin, China
Mengting Chen
Alibaba Group
Generative Modeling, Computer Vision
Qinye Zhou
Taobao and Tmall Group
Yabo Zhang
Harbin Institute of Technology, Harbin, China
Ming Liu
Harbin Institute of Technology, Harbin, China
Xiaoming Li
Harbin Institute of Technology, Harbin, China
Shaohui Liu
Harbin Institute of Technology, Harbin, China
Chen Ju
Alibaba Group, Shanghai Jiao Tong University
Multi-Modal Learning, AIGC, Data Governance, Video Understanding
Shuai Xiao
Alibaba Group
Machine Learning, Artificial Intelligence, Information Retrieval, Multimodal Models
Qingwen Liu
Tongji University
Wireless Networking, AI
Jinsong Lan
Taobao and Tmall Group
Wangmeng Zuo
School of Computer Science and Technology, Harbin Institute of Technology
Computer Vision, Image Processing, Generative AI, Deep Learning, Biometrics