HunyuanVideo-HOMA: Generic Human-Object Interaction in Multimodal Driven Human Animation

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of heavy reliance on high-fidelity motion data, poor generalization, and high interaction barriers in human–object interaction (HOI) video generation, this paper proposes the first weakly conditioned multimodal framework for generalizable HOI animation synthesis guided by sparse actions. Methodologically, it introduces: (1) a decoupled sparse motion guidance mechanism to reduce dependence on dense motion annotations; (2) a parameter-space HOI adapter and a facial cross-attention adapter, jointly ensuring physical plausibility and audio-driven lip-sync accuracy; and (3) an MMDiT-based architecture integrating dual input-space encoding, shared context fusion, and lightweight adapter fine-tuning. Experiments demonstrate state-of-the-art performance in naturalness and cross-object/scene generalization. The framework supports text-to-video generation, real-time object interaction, and interactive visualization—significantly enhancing the practicality and accessibility of HOI video generation.
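The facial cross-attention adapter described above conditions facial latent tokens on audio features. As a rough illustration of that mechanism (not the paper's implementation; all shapes and names here are hypothetical), a single-head scaled dot-product cross-attention step where face tokens query audio features can be sketched in numpy:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token attends over
    all key/value tokens (here: face latents attending to audio features)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Nq, Nk) attention logits
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values                  # (Nq, d_v) attended output

# Hypothetical shapes: 4 facial latent tokens query 6 audio feature tokens.
rng = np.random.default_rng(0)
face_tokens = rng.standard_normal((4, 8))
audio_feats = rng.standard_normal((6, 8))
out = cross_attention(face_tokens, audio_feats, audio_feats)
print(out.shape)  # (4, 8)
```

In the full model such a block would sit inside the MMDiT layers with learned query/key/value projections; this sketch omits the projections to show only the attention pattern.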

📝 Abstract
To address key limitations in human-object interaction (HOI) video generation -- specifically the reliance on curated motion data, limited generalization to novel objects/scenarios, and restricted accessibility -- we introduce HunyuanVideo-HOMA, a weakly conditioned multimodal-driven framework. HunyuanVideo-HOMA enhances controllability and reduces dependency on precise inputs through sparse, decoupled motion guidance. It encodes appearance and motion signals into the dual input space of a multimodal diffusion transformer (MMDiT), fusing them within a shared context space to synthesize temporally consistent and physically plausible interactions. To optimize training, we integrate a parameter-space HOI adapter initialized from pretrained MMDiT weights, preserving prior knowledge while enabling efficient adaptation, and a facial cross-attention adapter for anatomically accurate audio-driven lip synchronization. Extensive experiments confirm state-of-the-art performance in interaction naturalness and generalization under weak supervision. Finally, HunyuanVideo-HOMA demonstrates versatility in text-conditioned generation and interactive object manipulation, supported by a user-friendly demo interface. The project page is at https://anonymous.4open.science/w/homa-page-0FBE/.
Problem

Research questions and friction points this paper is trying to address.

Overcoming reliance on curated motion data for HOI video generation
Enhancing generalization to novel objects and interaction scenarios
Reducing dependency on precise inputs for controllable animation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal diffusion transformer for HOI synthesis
Weakly conditioned framework with sparse guidance
Parameter-space adapters for efficient training
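The parameter-space adapter is initialized from pretrained MMDiT weights so that prior knowledge is preserved while only a small set of parameters is tuned. A minimal sketch of that initialization idea (an assumption-laden toy, not the paper's architecture): copy the pretrained weights and add a zero-initialized trainable residual, so the adapted layer initially reproduces the pretrained output exactly.

```python
import numpy as np

class LinearAdapter:
    """Toy parameter-space adapter: the pretrained weights are kept frozen
    and a zero-initialized residual is the only trainable part, so at
    initialization the adapter is an exact copy of the pretrained layer."""
    def __init__(self, pretrained_w):
        self.w = pretrained_w.copy()              # frozen pretrained weights
        self.delta = np.zeros_like(pretrained_w)  # trainable residual, zero-init

    def __call__(self, x):
        return x @ (self.w + self.delta)

pretrained_w = np.random.default_rng(1).standard_normal((8, 8))
adapter = LinearAdapter(pretrained_w)
x = np.ones((2, 8))
# At init the adapter output matches the pretrained layer exactly,
# so fine-tuning starts from the model's prior behavior.
assert np.allclose(adapter(x), x @ pretrained_w)
```

Only `delta` would receive gradient updates during fine-tuning, which is what makes this kind of adapter lightweight relative to full-model training.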
👥 Authors
Ziyao Huang — Institute of Computing Technology, CAS (Computer Vision)
Zixiang Zhou — Tencent Hunyuan
Juan Cao — Professor of Mathematics, Xiamen University (Computer Aided Geometric Design, Computer Graphics)
Yifeng Ma — Tsinghua University (Computer Vision, Deep Learning)
Yi Chen — Tencent Hunyuan
Zejing Rao — University of Chinese Academy of Sciences
Zhiyong Xu — Tencent Hunyuan
Hongmei Wang — Tencent Hunyuan
Qin Lin — Tencent Hunyuan
Yuan Zhou — Tencent Hunyuan
Qinglin Lu — Tencent Hunyuan
Fan Tang — University of Chinese Academy of Sciences