VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models

📅 2024-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central challenge for zero-shot customized generation with video diffusion models (VDMs): simultaneously preserving the appearance of a reference subject (e.g., a person or object) and maintaining generative diversity. We propose a customization method that requires neither auxiliary feature-extraction models nor exemplar frames. Our key insight is that VDMs already possess an intrinsic capability to model subject-specific features within their latent representations; this work is the first to identify and exploit it. Concretely, we design a bidirectional spatial self-attention mechanism that adaptively injects reference-image features into the VDM's denoising process and aligns them spatiotemporally with the generated content. Critically, the approach operates entirely within the original VDM architecture, using the reference image alone to guide feature extraction and reweighting, without introducing new parameters or fine-tuning. Experiments on both portrait and object customization tasks show substantial improvements in subject consistency, motion naturalness, and visual fidelity over state-of-the-art zero-shot methods.
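
A minimal sketch of the bidirectional spatial self-attention described above, assuming a PyTorch VDM whose frozen attention projections (`to_q`, `to_k`, `to_v`, `to_out`) are reused as-is; the function name, shapes, and the simple concatenate-then-split scheme are illustrative assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def bidirectional_spatial_self_attention(
    x_gen: torch.Tensor,        # (B, N_gen, C): tokens of one generated frame
    x_ref: torch.Tensor,        # (B, N_ref, C): tokens of the reference image
    to_q, to_k, to_v, to_out,   # the VDM's frozen attention projections
    num_heads: int = 8,
):
    # Concatenate so reference and generated tokens attend to each other
    # (bidirectional interaction) inside the VDM's existing self-attention.
    x = torch.cat([x_gen, x_ref], dim=1)            # (B, N_gen + N_ref, C)
    B, N, C = x.shape
    head_dim = C // num_heads

    def split_heads(t):
        return t.view(B, N, num_heads, head_dim).transpose(1, 2)

    q, k, v = split_heads(to_q(x)), split_heads(to_k(x)), split_heads(to_v(x))
    out = F.scaled_dot_product_attention(q, k, v)   # (B, H, N, head_dim)
    out = to_out(out.transpose(1, 2).reshape(B, N, C))

    # Only the generated frame's tokens continue through denoising.
    return out[:, : x_gen.shape[1]]
```

Because the reference tokens share the query/key/value space with the generated frame's tokens, information flows in both directions through a layer the VDM already has, which is what lets the method avoid introducing new parameters.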

📝 Abstract
Zero-shot customized video generation has gained significant attention due to its substantial application potential. Existing methods rely on additional models to extract and inject reference subject features, assuming that the Video Diffusion Model (VDM) alone is insufficient for zero-shot customized video generation. However, these methods often struggle to maintain a consistent subject appearance due to suboptimal feature extraction and injection techniques. In this paper, we reveal that the VDM inherently possesses the force to extract and inject subject features. Departing from previous heuristic approaches, we introduce a novel framework that leverages the VDM's inherent force to enable high-quality zero-shot customized video generation. Specifically, for feature extraction, we directly input reference images into the VDM and use its intrinsic feature extraction process, which not only provides fine-grained features but also aligns closely with the VDM's pre-trained knowledge. For feature injection, we devise an innovative bidirectional interaction between subject features and generated content through spatial self-attention within the VDM, ensuring better subject fidelity while maintaining the diversity of the generated video. Experiments on both customized human and object video generation validate the effectiveness of our framework.
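
The abstract's extraction step, feeding the reference image through the VDM itself, can be pictured with forward hooks. The sketch below assumes a diffusers-style UNet whose spatial self-attention modules are named `attn1`; `vae`, `unet`, `null_text_emb`, and the single-pass caching are assumptions for illustration, not the authors' code:

```python
import torch

@torch.no_grad()
def extract_reference_features(unet, vae, ref_image, timestep, null_text_emb):
    # Encode the reference image with the VDM's own VAE (the reference
    # latent may additionally be noised to `timestep`; omitted here).
    latent = vae.encode(ref_image).latent_dist.sample() * vae.config.scaling_factor

    features, handles = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            # Cache the tokens entering this spatial self-attention layer.
            features[name] = inputs[0].detach()
        return hook

    # In diffusers-style UNets, spatial self-attention modules end in "attn1".
    for name, module in unet.named_modules():
        if name.endswith("attn1"):
            handles.append(module.register_forward_hook(make_hook(name)))

    # One forward pass through the frozen VDM populates the cache;
    # `null_text_emb` stands in for an empty-prompt text embedding.
    unet(latent, timestep, encoder_hidden_states=null_text_emb)
    for h in handles:
        h.remove()
    return features
```

The cached per-layer tokens are what a bidirectional self-attention layer, as sketched earlier, would attend over, so extraction and injection both reuse machinery the pre-trained VDM already contains.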
Problem

Research questions and friction points this paper is trying to address.

Video Diffusion Models
High-quality Customized Video Generation
Subject-consistent Diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

VideoMaker method
Video Diffusion Model (VDM)
Customized video generation