🤖 AI Summary
This work addresses the underexplored challenge of customized multi-subject text-to-video generation. Methodologically, the authors propose the first framework to customize multiple subjects simultaneously while ensuring temporal coherence and high visual fidelity: (1) latent-code motion dynamics coupled with temporal cross-frame attention on top of a pretrained image diffusion model; (2) Disen-Mix Finetuning, a disentangled fine-tuning strategy that mitigates attribute entanglement among multiple subjects; and (3) Human-in-the-Loop Re-finetuning to better align outputs with perceptual quality. Contributions include: (i) MultiStudioBench, the first dedicated benchmark for customized multi-subject text-to-video generation; (ii) significant performance gains on this benchmark over state-of-the-art single-subject methods transferred to the multi-subject setting; and (iii) high-fidelity videos featuring novel events, unseen backgrounds, and complex multi-subject interactions.
📝 Abstract
Customized text-to-video generation aims to produce text-guided videos of user-given subjects and has gained increasing attention recently. However, existing works are primarily limited to generating videos for a single subject, leaving the more challenging problem of customized multi-subject text-to-video generation largely unexplored. In this paper, we fill this gap with VideoDreamer, a novel framework that generates temporally consistent, text-guided videos that faithfully preserve the visual features of the given multiple subjects. Specifically, VideoDreamer leverages pretrained Stable Diffusion, augmented with latent-code motion dynamics and temporal cross-frame attention, as its base video generator. The generator is then customized for the given subjects via the proposed Disen-Mix Finetuning and Human-in-the-Loop Re-finetuning strategies, which tackle the attribute binding problem of multi-subject generation. We also introduce MultiStudioBench, a benchmark for evaluating customized multi-subject text-to-video generation models. Extensive experiments demonstrate VideoDreamer's remarkable ability to generate videos with new content, such as new events and backgrounds, tailored to the customized multiple subjects. Our project page is available at https://videodreamer23.github.io/.
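The abstract does not spell out how temporal cross-frame attention works. A common formulation in related tuning-free video generation methods (an assumption here, not necessarily VideoDreamer's exact design) has every frame's queries attend to the keys and values of the first frame, so that appearance is shared across frames. A minimal NumPy sketch under that assumption, with identity Q/K/V projections for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(frames):
    # frames: (F, N, d) — F frames, N spatial tokens, d channels.
    # Each frame's queries attend to the keys/values of the FIRST frame,
    # a common way to propagate a shared appearance across frames.
    F, N, d = frames.shape
    q = frames                                      # (F, N, d)
    k = np.broadcast_to(frames[0], (F, N, d))       # anchor frame as keys
    v = k                                           # and as values
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)  # (F, N, N)
    return softmax(scores) @ v                      # (F, N, d)
```

Because all frames draw values from the anchor frame, each output token is a convex combination of frame-0 tokens, which is what encourages temporally consistent appearance.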