🤖 AI Summary
Existing video diffusion models struggle to disentangle scene dynamics from camera motion, resulting in coupled and imprecise spatiotemporal control. To address this, we propose the first 4D-controllable video generation framework that explicitly decouples these two factors. Our approach introduces a continuous 4D positional encoding in the attention layers to model spatiotemporal coordinates, together with an adaptive normalization mechanism that independently conditions features on scene dynamics (world-time sequences) and camera poses (trajectory parameters). To support training, we curate a high-quality dataset in which scene and camera parameters are disentangled. Experiments demonstrate robust 4D controllability across diverse temporal patterns and camera trajectories, with superior visual fidelity and significantly improved spatiotemporal precision compared to state-of-the-art methods.
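As an illustration of what a continuous 4D positional encoding might look like, the sketch below maps per-token world-time and spatial coordinates (t, x, y, z) to sinusoidal features. The function name, the frequency schedule, and the per-axis concatenation are assumptions made for exposition, not the exact encoding used in this work.

```python
import torch

def sinusoidal_4d_encoding(coords: torch.Tensor, dim_per_axis: int = 64) -> torch.Tensor:
    """Encode continuous 4D coordinates (t, x, y, z) with sinusoidal features.

    coords: (..., 4) tensor of continuous world-time and spatial coordinates.
    Returns: (..., 4 * dim_per_axis) encoding, concatenated across the four axes.
    """
    half = dim_per_axis // 2
    # Log-spaced frequency bank, shared across the four coordinate axes.
    freqs = torch.exp(torch.linspace(0.0, 8.0, half, device=coords.device))
    # (..., 4, half): each coordinate axis multiplied by every frequency.
    angles = coords.unsqueeze(-1) * freqs
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)  # (..., 4, dim_per_axis)
    return enc.flatten(-2)  # (..., 4 * dim_per_axis)
```

Because the coordinates are continuous rather than discrete frame or pixel indices, such an encoding can represent arbitrary world-time values and camera-dependent positions, which is what allows time and viewpoint to be queried independently.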
📝 Abstract
Emerging video diffusion models achieve high visual fidelity but fundamentally couple scene dynamics with camera motion, limiting their ability to provide precise spatial and temporal control. We introduce a 4D-controllable video diffusion framework that explicitly decouples scene dynamics from camera pose, enabling fine-grained manipulation of each. Our framework takes continuous world-time sequences and camera trajectories as conditioning inputs, injecting them into the video diffusion model through a 4D positional encoding in the attention layers and adaptive normalizations for feature modulation. To train this model, we curate a unique dataset in which temporal and camera variations are independently parameterized; this dataset will be made public. Experiments show that our model achieves robust real-world 4D control across diverse timing patterns and camera trajectories, while preserving high generation quality and outperforming prior work in controllability. See our website for video results: https://19reborn.github.io/Bullet4D/
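To make the adaptive-normalization conditioning pathway concrete, here is a minimal PyTorch sketch with separate world-time and camera-pose branches. The module name `Decoupled4DAdaNorm`, the linear scale/shift predictors, and the sequential application of the two modulations are illustrative assumptions under this description, not the exact design of the framework.

```python
import torch
import torch.nn as nn

class Decoupled4DAdaNorm(nn.Module):
    """AdaLN-style modulation with independent world-time and camera-pose branches."""

    def __init__(self, feat_dim: int, cond_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(feat_dim, elementwise_affine=False)
        # Independent scale/shift predictors so the two factors stay disentangled.
        self.time_mod = nn.Linear(cond_dim, 2 * feat_dim)
        self.cam_mod = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor, cam_emb: torch.Tensor) -> torch.Tensor:
        # x: (B, N, feat_dim) token features; t_emb, cam_emb: (B, cond_dim) embeddings.
        ts, tb = self.time_mod(t_emb).chunk(2, dim=-1)
        cs, cb = self.cam_mod(cam_emb).chunk(2, dim=-1)
        h = self.norm(x)
        h = h * (1 + ts.unsqueeze(1)) + tb.unsqueeze(1)   # scene-dynamics (world-time) modulation
        h = h * (1 + cs.unsqueeze(1)) + cb.unsqueeze(1)   # camera-trajectory modulation
        return h
```

Keeping the two modulation branches separate is one simple way to ensure that changing the world-time input does not perturb the camera conditioning, and vice versa.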