🤖 AI Summary
To address two key challenges in human-centric video generation (HCVG)—the scarcity of high-quality multimodal paired data and the difficulty of jointly optimizing subject consistency and audiovisual synchronization—this paper proposes a unified multimodal co-generation framework. Methodologically: (1) the authors construct a high-quality dataset of diverse, paired text–image–audio triplets; (2) they introduce a two-stage progressive training paradigm, combining minimally invasive image injection with a “focus-by-predicting” strategy to decouple and jointly optimize subject identity preservation and lip-sync accuracy; (3) building on diffusion models, they incorporate audio cross-attention and a time-adaptive classifier-free guidance scheme for fine-grained multimodal control at inference. Experiments show the approach surpasses specialized state-of-the-art models in visual fidelity, lip-sync precision, and prompt adherence, establishing a unified, controllable framework for generating human videos from text, image, and audio inputs.
📝 Abstract
Human-Centric Video Generation (HCVG) methods seek to synthesize human videos from multimodal inputs, including text, image, and audio. Existing methods struggle to effectively coordinate these heterogeneous modalities due to two challenges: the scarcity of training data with paired triplet conditions, and the difficulty of coordinating the sub-tasks of subject preservation and audio-visual sync under multimodal inputs. In this work, we present HuMo, a unified HCVG framework for collaborative multimodal control. For the first challenge, we construct a high-quality dataset with diverse, paired text, reference images, and audio. For the second challenge, we propose a two-stage progressive multimodal training paradigm with task-specific strategies. For the subject preservation task, we adopt a minimally invasive image-injection strategy to preserve the prompt-following and visual generation abilities of the foundation model. For the audio-visual sync task, beyond the commonly adopted audio cross-attention layer, we propose a focus-by-predicting strategy that implicitly guides the model to associate audio with facial regions. For joint learning of controllability across multimodal inputs, we progressively incorporate the audio-visual sync task on top of the previously acquired capabilities. During inference, to enable flexible and fine-grained multimodal control, we design a time-adaptive Classifier-Free Guidance strategy that dynamically adjusts guidance weights across denoising steps. Extensive experiments demonstrate that HuMo surpasses specialized state-of-the-art methods on the individual sub-tasks, establishing a unified framework for collaborative multimodal-conditioned HCVG. Project Page: https://phantom-video.github.io/HuMo.
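To make the time-adaptive Classifier-Free Guidance idea concrete, here is a minimal sketch of how step-dependent guidance weights for multiple conditions might be combined. The schedule shapes, weight values, and function names below are illustrative assumptions for exposition only; the paper does not specify its exact formulation here.

```python
def adaptive_weight(step: int, num_steps: int, w_start: float, w_end: float) -> float:
    """Linearly interpolate a guidance weight across denoising steps.
    (Hypothetical linear schedule; the actual schedule may differ.)"""
    t = step / max(num_steps - 1, 1)  # normalized progress in [0, 1]
    return (1.0 - t) * w_start + t * w_end


def guided_prediction(eps_uncond: float, eps_text: float, eps_audio: float,
                      step: int, num_steps: int) -> float:
    """Combine unconditional and per-condition noise predictions with
    step-dependent CFG weights (scalar stand-ins for model outputs)."""
    # Example: emphasize text guidance early (global structure),
    # audio guidance late (fine lip motion) -- an assumed design choice.
    w_text = adaptive_weight(step, num_steps, w_start=7.5, w_end=3.0)
    w_audio = adaptive_weight(step, num_steps, w_start=2.0, w_end=5.0)
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_audio * (eps_audio - eps_uncond))
```

The key point is that the guidance weight for each modality is a function of the denoising step rather than a single fixed scalar, so different conditions can dominate at different stages of sampling.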