OmniHuman-1.5: Instilling an Active Mind in Avatars via Cognitive Simulation

📅 2025-08-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current video avatar models generate physically plausible animations but are limited to low-level audio-visual synchronization, lacking a deeper semantic understanding of emotion, intent, and context. To address this, we propose a cognition-driven virtual character generation framework. First, we employ a multimodal large language model (MLLM) to extract high-level semantics and guide animation generation. Second, we introduce a Pseudo Last Frame mechanism to enable cross-modal coordination and conflict mitigation within a multimodal diffusion Transformer (DiT) architecture. Third, we integrate joint audio-image-text encoding to ensure semantic consistency across modalities. Experiments demonstrate state-of-the-art performance in lip-sync accuracy, video quality, motion naturalness, and textual semantic fidelity. Moreover, the framework generalizes effectively to multi-character and non-human agent scenarios.
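
To make the two-stage idea concrete, here is a minimal, self-contained PyTorch sketch of the flow described above: an MLLM-style planner turns raw conditions into a structured textual plan, and a motion generator fuses audio, image, and text conditions into per-frame latents. The names `plan_semantics` and `MotionGenerator` are illustrative assumptions, not the paper's API, and the generator is a toy stand-in for the multimodal DiT.

```python
import torch
import torch.nn as nn

# Stage 1: "cognitive" planning. A real system would query a multimodal LLM;
# this stub returns a fixed structured plan so the sketch stays self-contained.
def plan_semantics(audio: torch.Tensor, image: torch.Tensor, prompt: str) -> str:
    return "emotion: joyful | intent: explain | gesture: open palms"

# Stage 2: motion generation conditioned on all modalities.
class MotionGenerator(nn.Module):
    """Toy stand-in for the multimodal DiT: fuses audio, image, and text
    embeddings into per-frame motion latents."""
    def __init__(self, dim: int = 256, frames: int = 16):
        super().__init__()
        self.fuse = nn.Linear(3 * dim, dim)
        self.frames = frames

    def forward(self, audio_emb, image_emb, text_emb):
        cond = self.fuse(torch.cat([audio_emb, image_emb, text_emb], dim=-1))
        # Broadcast the fused condition over the frame axis; the real model
        # would run iterative diffusion denoising here instead.
        return cond.unsqueeze(1).expand(-1, self.frames, -1)

B, D = 2, 256
plan = plan_semantics(torch.randn(B, 64), torch.randn(B, 3, 224, 224), "a host")
# In a real pipeline, text_emb would encode `plan`; random here for brevity.
audio_emb, image_emb, text_emb = (torch.randn(B, D) for _ in range(3))
print(plan)
print(MotionGenerator(D)(audio_emb, image_emb, text_emb).shape)  # (2, 16, 256)
```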

📝 Abstract
Existing video avatar models can produce fluid human animations, yet they struggle to move beyond mere physical likeness to capture a character's authentic essence. Their motions typically synchronize with low-level cues like audio rhythm, lacking a deeper semantic understanding of emotion, intent, or context. To bridge this gap, we propose a framework designed to generate character animations that are not only physically plausible but also semantically coherent and expressive. Our model, OmniHuman-1.5, is built upon two key technical contributions. First, we leverage Multimodal Large Language Models to synthesize a structured textual representation of conditions that provides high-level semantic guidance. This guidance steers our motion generator beyond simplistic rhythmic synchronization, enabling the production of actions that are contextually and emotionally resonant. Second, to ensure the effective fusion of these multimodal inputs and mitigate inter-modality conflicts, we introduce a specialized Multimodal DiT architecture with a novel Pseudo Last Frame design. The synergy of these components allows our model to accurately interpret the joint semantics of audio, images, and text, thereby generating motions that are deeply coherent with the character, scene, and linguistic content. Extensive experiments demonstrate that our model achieves leading performance across a comprehensive set of metrics, including lip-sync accuracy, video quality, motion naturalness, and semantic consistency with textual prompts. Furthermore, our approach shows remarkable extensibility to complex scenarios, such as those involving multi-person and non-human subjects. Homepage: https://omnihuman-lab.github.io/v1_5/
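
The abstract does not detail how the Pseudo Last Frame works internally, so the following is only a guess at the mechanism: a sketch assuming it amounts to appending a learnable frame slot to the video latents, giving text guidance a dedicated anchor rather than competing with audio-driven conditioning on real frames. `PseudoLastFrame` is a hypothetical name; the paper's actual design may differ.

```python
import torch
import torch.nn as nn

class PseudoLastFrame(nn.Module):
    """Illustrative guess at the 'Pseudo Last Frame' idea: append one
    learnable frame slot to the video latents so high-level guidance has
    a dedicated anchor. This is an assumption, not the paper's code."""
    def __init__(self, dim: int):
        super().__init__()
        self.pseudo = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, frame_latents: torch.Tensor) -> torch.Tensor:
        # frame_latents: (B, T, dim) -> (B, T + 1, dim)
        b = frame_latents.size(0)
        return torch.cat([frame_latents, self.pseudo.expand(b, -1, -1)], dim=1)

x = torch.randn(2, 16, 256)
print(PseudoLastFrame(256)(x).shape)  # torch.Size([2, 17, 256])
```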
Problem

Research questions and friction points this paper is trying to address.

Generating avatar animations that are semantically coherent and expressive, not just physically plausible
Moving beyond simplistic rhythmic synchronization that lacks emotional and contextual understanding
Fusing multimodal inputs so that motions resonate with the character, scene, and context
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Large Language Models that synthesize structured textual guidance for high-level semantics
Multimodal DiT architecture with a novel Pseudo Last Frame design to mitigate inter-modality conflicts
Generating contextually and emotionally resonant avatar motions via joint audio-image-text encoding (see the sketch below)
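
The summary mentions joint audio-image-text encoding for semantic consistency; as a hedged illustration of what such joint encoding could look like, the sketch below projects each modality into a shared space and lets a self-attention layer align their semantics. `JointConditionEncoder`, the projection layers, and all dimensions are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class JointConditionEncoder(nn.Module):
    """Minimal sketch (assumed, not the paper's code) of joint
    audio-image-text encoding: project each modality into a shared
    space, then let self-attention align their semantics."""
    def __init__(self, d_audio: int, d_image: int, d_text: int, dim: int = 256):
        super().__init__()
        self.pa = nn.Linear(d_audio, dim)
        self.pi = nn.Linear(d_image, dim)
        self.pt = nn.Linear(d_text, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, audio, image, text):
        # One token per modality: (B, 3, dim)
        tokens = torch.stack([self.pa(audio), self.pi(image), self.pt(text)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        return fused

enc = JointConditionEncoder(128, 512, 768)
out = enc(torch.randn(2, 128), torch.randn(2, 512), torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 3, 256])
```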
Jianwen Jiang
Intelligent Creation Lab, ByteDance
Weihong Zeng
Intelligent Creation Lab, ByteDance
Zerong Zheng
ByteDance
Jiaqi Yang
Intelligent Creation Lab, ByteDance
Chao Liang
Intelligent Creation Lab, ByteDance
Wang Liao
Intelligent Creation Lab, ByteDance
Han Liang
Intelligent Creation Lab, ByteDance
Yuan Zhang
Intelligent Creation Lab, ByteDance
Mingyuan Gao
Professor, Institute of Chemistry, Chinese Academy of Sciences