🤖 AI Summary
This paper addresses two key challenges in Multi-Concept Video Customization (MCVC): severe identity confusion and the scarcity of high-quality video-entity pairs. To tackle these, we propose a diffusion Transformer-based disentangled generation framework. Methodologically: (1) we introduce the first disentangled multi-concept embedding injection mechanism, enabling strong identity separation while preserving concept fidelity; (2) we design an automated pipeline for constructing video-entity pairs that systematically alleviates data scarcity; (3) we establish a multi-dimensional evaluation benchmark covering concept fidelity, identity disentanglement, and generation quality. Experiments across six complex concept-combination scenarios demonstrate that our method consistently outperforms state-of-the-art approaches, significantly improving semantic accuracy and visual distinguishability in multi-subject videos. It enables stable, high-fidelity customization without test-time fine-tuning, even for highly similar concepts.
📝 Abstract
Text-to-video generation has made remarkable advancements through diffusion models. However, Multi-Concept Video Customization (MCVC) remains a significant challenge. We identify two key difficulties in this task: 1) the identity decoupling problem, where directly adopting existing customization methods inevitably mixes attributes when handling multiple concepts simultaneously, and 2) the scarcity of high-quality video-entity pairs, which are crucial for training a model that represents and decouples diverse concepts well. To address these challenges, we introduce ConceptMaster, an innovative framework that effectively tackles the critical issue of identity decoupling while maintaining concept fidelity in customized videos. Specifically, we introduce a novel strategy of learning decoupled multi-concept embeddings that are injected into the diffusion model in a standalone manner, which effectively guarantees the quality of customized videos with multiple identities, even for highly similar visual concepts. To further overcome the scarcity of high-quality MCVC data, we carefully establish a data construction pipeline that enables systematic collection of precise multi-concept video-entity data across diverse concepts. A comprehensive benchmark is designed to validate the effectiveness of our model along three critical dimensions: concept fidelity, identity decoupling ability, and video generation quality, across six different concept-composition scenarios. Extensive experiments demonstrate that ConceptMaster significantly outperforms previous approaches for this task, paving the way for generating personalized and semantically accurate videos across multiple concepts.
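The standalone injection strategy described above can be illustrated with a minimal sketch: each concept's embeddings drive their own cross-attention pass over the video tokens, and the per-concept residuals are summed, rather than concatenating all concept embeddings into one context (which lets attributes bleed across concepts). This is a simplified single-head numpy sketch with illustrative names, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, wq, wk, wv):
    """Single-head cross-attention from video tokens to one concept's context."""
    q = queries @ wq          # (num_tokens, d)
    k = context @ wk          # (ctx_len, d)
    v = context @ wv          # (ctx_len, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (num_tokens, ctx_len)
    return attn @ v

def decoupled_injection(video_tokens, concept_embeds, params):
    """Inject each concept in its own attention pass and sum the residuals.

    Because the softmax in each pass only normalizes over a single concept's
    tokens, one concept's attributes cannot compete with (and contaminate)
    another's, unlike a single pass over concatenated embeddings.
    """
    out = video_tokens.copy()
    for embeds in concept_embeds:
        out = out + cross_attention(video_tokens, embeds, *params)
    return out
```

Sharing `params` across concepts here is just for brevity; per-concept (or shared but learned) projections would be used in practice.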