🤖 AI Summary
To address the limitations of conventional video compositing, namely its heavy reliance on manual intervention, lengthy production cycles, and high costs, this paper proposes an interactive generative video compositing framework. The method adaptively injects the identity and motion cues of a foreground video into a target background video, while supporting user-controllable adjustment of the added dynamic elements' attributes, such as scale and trajectory. Technically, the authors introduce a lightweight DiT-based background preservation branch with masked token injection, a DiT fusion block built on full self-attention, a simple foreground augmentation strategy for training, and an Extended Rotary Position Embedding (ERoPE) for fusing foreground and background tokens with different layouts. They also construct VideoComp, the first large-scale dataset for this task, comprising 61K samples. Extensive experiments demonstrate that the approach significantly outperforms state-of-the-art alternatives in visual fidelity and spatiotemporal consistency, substantially improving creative efficiency.
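As a concrete, hypothetical illustration of the masked token injection idea mentioned above, the sketch below mixes clean background-latent tokens into the denoising stream only where a preservation mask is set, leaving the edited region to the model. The tensor shapes, function name, and injection rule are assumptions for illustration, not the paper's released implementation.

```python
# Minimal sketch (assumptions, not the paper's code) of masked token injection
# for background preservation: clean background tokens replace the denoising
# tokens wherever the preserve-mask is 1, and are ignored elsewhere.

import torch

def inject_background_tokens(noisy_tokens: torch.Tensor,
                             bg_tokens: torch.Tensor,
                             keep_mask: torch.Tensor) -> torch.Tensor:
    """
    noisy_tokens: (B, L, D) tokens currently being denoised
    bg_tokens:    (B, L, D) tokens encoded from the clean background video
    keep_mask:    (B, L)    1 where the background must be preserved, 0 elsewhere
    """
    keep = keep_mask.unsqueeze(-1)                       # (B, L, 1), broadcast over channels
    return keep * bg_tokens + (1.0 - keep) * noisy_tokens

if __name__ == "__main__":
    B, L, D = 1, 12, 32
    mask = (torch.rand(B, L) > 0.3).float()              # hypothetical preserve-mask
    mixed = inject_background_tokens(torch.randn(B, L, D), torch.randn(B, L, D), mask)
    print(mixed.shape)                                   # torch.Size([1, 12, 32])
```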
📝 Abstract
Video compositing combines live-action footage into a final video, serving as a crucial technique in video creation and film production. Traditional pipelines require intensive manual labor and expert collaboration, resulting in lengthy production cycles and high manpower costs. To address this issue, we automate the process with generative models, a task we call generative video compositing. This new task strives to adaptively inject the identity and motion information of a foreground video into a target video in an interactive manner, allowing users to customize the size, motion trajectory, and other attributes of the dynamic elements added to the final video. Specifically, we designed a novel pipeline based on the intrinsic properties of the Diffusion Transformer (DiT). To maintain consistency of the target video before and after editing, we devised a lightweight DiT-based background preservation branch with masked token injection. To inherit dynamic elements from other sources, we proposed a DiT fusion block that uses full self-attention, along with a simple yet effective foreground augmentation strategy for training. In addition, to fuse background and foreground videos with different layouts under user control, we developed a novel position embedding, named Extended Rotary Position Embedding (ERoPE). Finally, we curated VideoComp, a dataset comprising 61K video sets for this new task, including complete dynamic elements and high-quality target videos. Experiments demonstrate that our method effectively realizes generative video compositing, outperforming existing alternative solutions in fidelity and consistency.
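The abstract does not spell out how ERoPE or the full self-attention fusion are implemented, so the following is only a plausible, heavily hedged reading: foreground tokens are concatenated with background tokens and given position indices offset past the background sequence, so the two streams are fused in one attention pass without sharing positions. All shapes, names, and the offset scheme are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' method) of fusing background and
# foreground token streams with full self-attention, using standard 1-D RoPE
# applied to "extended" position indices for the foreground stream.

import torch
import torch.nn.functional as F


def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard 1-D RoPE angles for integer positions, shape (len, dim/2)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return positions.float()[:, None] * inv_freq[None, :]


def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x (batch, len, dim) by the given angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


def fused_self_attention(bg_tokens: torch.Tensor, fg_tokens: torch.Tensor,
                         w_qkv: torch.Tensor) -> torch.Tensor:
    """Single-head full self-attention over the concatenated token streams.

    bg_tokens: (B, Lb, D) background-branch tokens
    fg_tokens: (B, Lf, D) foreground tokens injected for compositing
    """
    B, Lb, D = bg_tokens.shape
    Lf = fg_tokens.shape[1]

    # Assumed "extended" positions: foreground indices start after the
    # background sequence instead of overlapping it.
    bg_pos = torch.arange(Lb)
    fg_pos = torch.arange(Lf) + Lb

    tokens = torch.cat([bg_tokens, fg_tokens], dim=1)      # (B, Lb+Lf, D)
    q, k, v = (tokens @ w_qkv).chunk(3, dim=-1)            # each (B, Lb+Lf, D)

    angles = rope_angles(torch.cat([bg_pos, fg_pos]), D)   # (Lb+Lf, D/2)
    q, k = apply_rope(q, angles), apply_rope(k, angles)

    attn = F.scaled_dot_product_attention(q, k, v)         # full attention, no mask
    return attn[:, :Lb]                                    # keep background-length output (assumed)


if __name__ == "__main__":
    B, Lb, Lf, D = 2, 16, 8, 64
    out = fused_self_attention(torch.randn(B, Lb, D), torch.randn(B, Lf, D),
                               torch.randn(D, 3 * D))
    print(out.shape)  # torch.Size([2, 16, 64])
```

The offset-position scheme is simply one way to let two streams of different layouts coexist in a single attention window; the paper's actual ERoPE construction may differ.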