VideoDreamer: Customized Multi-Subject Text-to-Video Generation with Disen-Mix Finetuning

📅 2023-11-02
🏛️ arXiv.org
📈 Citations: 28
Influential: 4
🤖 AI Summary
This work addresses the underexplored challenge of multi-subject customized text-to-video generation. Methodologically, the authors propose the first framework that customizes multiple subjects simultaneously while maintaining temporal coherence and high visual fidelity: (1) latent-space motion modeling coupled with cross-frame temporal attention; (2) Disen-Mix, a disentangled fine-tuning strategy that mitigates attribute entanglement among multiple subjects; and (3) human-feedback-driven re-finetuning for better alignment with perceptual quality. Contributions include: (i) MultiStudioBench, the first dedicated benchmark for multi-subject video generation; (ii) significant performance gains over state-of-the-art single-subject transfer methods on this benchmark; and (iii) high-fidelity videos featuring novel events, unseen backgrounds, and complex multi-subject interactions.
📝 Abstract
Customized text-to-video generation aims to generate text-guided videos with customized user-given subjects, which has gained increasing attention recently. However, existing works are primarily limited to generating videos for a single subject, leaving the more challenging problem of customized multi-subject text-to-video generation largely unexplored. In this paper, we fill this gap and propose a novel VideoDreamer framework. VideoDreamer can generate temporally consistent text-guided videos that faithfully preserve the visual features of the given multiple subjects. Specifically, VideoDreamer leverages the pretrained Stable Diffusion with latent-code motion dynamics and temporal cross-frame attention as the base video generator. The video generator is further customized for the given multiple subjects by the proposed Disen-Mix Finetuning and Human-in-the-Loop Re-finetuning strategy, which can tackle the attribute binding problem of multi-subject generation. We also introduce MultiStudioBench, a benchmark for evaluating customized multi-subject text-to-video generation models. Extensive experiments demonstrate the remarkable ability of VideoDreamer to generate videos with new content such as new events and backgrounds, tailored to the customized multiple subjects. Our project page is available at https://videodreamer23.github.io/.
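The abstract notes that the base video generator extends pretrained Stable Diffusion with latent-code motion dynamics and temporal cross-frame attention. As a rough illustration only (not the paper's implementation; the function name, shapes, and single-head form are assumptions), a minimal NumPy sketch of cross-frame attention, where every frame's queries attend to keys and values taken from a reference frame so that appearance stays tied across time:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(frames, Wq, Wk, Wv, ref=0):
    """Single-head cross-frame attention (illustrative sketch).

    frames: (T, N, d) latent token features for T frames.
    Keys/values come from the reference frame (default: first frame),
    so every frame reuses the reference frame's appearance features.
    """
    T, N, d = frames.shape
    k = frames[ref] @ Wk              # (N, d) keys from reference frame
    v = frames[ref] @ Wv              # (N, d) values from reference frame
    out = np.empty_like(frames)
    for t in range(T):
        q = frames[t] @ Wq            # (N, d) queries from frame t
        attn = softmax(q @ k.T / np.sqrt(d))  # (N, N) attention weights
        out[t] = attn @ v
    return out
```

Because all frames read values from the same reference frame, a frame whose latents match the reference produces identical output, which is the consistency property this attention variant is designed to encourage.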
Problem

Research questions and friction points this paper is trying to address.

Customized multi-subject text-to-video generation
Attribute binding in multi-subject video synthesis
Temporal consistency with customized subjects and motions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disen-Mix Finetuning for multi-subject generation
Human-in-the-Loop Re-finetuning strategy
Disentangled motion customization for temporal modules
Hong Chen
Department of Computer Science and Technology, Tsinghua University
Xin Wang
Department of Computer Science and Technology, Tsinghua University; Beijing National Research Center for Information Science and Technology, Tsinghua
Guanning Zeng
Department of Computer Science and Technology, Tsinghua University
Yipeng Zhang
Tsinghua University
Yuwei Zhou
Tsinghua University
Feilin Han
Beijing Film Academy
Filmmaking Technology; Virtual Reality
Wenwu Zhu
Professor, Computer Science, Tsinghua University
Multimedia Computing; Network Representation Learning