🤖 AI Summary
To address the heavy reliance on high-fidelity motion data, poor generalization, and high interaction barriers in human-object interaction (HOI) video generation, this paper proposes the first weakly conditioned multimodal framework for generalizable HOI animation synthesis guided by sparse actions. Methodologically, it introduces (1) a decoupled sparse motion guidance mechanism that reduces dependence on dense motion annotations; (2) a parameter-space HOI adapter and a facial cross-attention adapter, which jointly ensure physical plausibility and accurate audio-driven lip synchronization; and (3) an MMDiT-based architecture that integrates dual input-space encoding, shared context fusion, and lightweight adapter fine-tuning. Experiments demonstrate state-of-the-art interaction naturalness and cross-object/scene generalization. The framework also supports text-to-video generation, real-time object interaction, and interactive visualization, substantially improving the practicality and accessibility of HOI video generation.
📝 Abstract
To address key limitations in human-object interaction (HOI) video generation -- specifically the reliance on curated motion data, limited generalization to novel objects/scenarios, and restricted accessibility -- we introduce HunyuanVideo-HOMA, a weakly conditioned multimodal-driven framework. HunyuanVideo-HOMA enhances controllability and reduces dependency on precise inputs through sparse, decoupled motion guidance. It encodes appearance and motion signals into the dual input space of a multimodal diffusion transformer (MMDiT), fusing them within a shared context space to synthesize temporally consistent and physically plausible interactions. To optimize training, we integrate a parameter-space HOI adapter initialized from pretrained MMDiT weights, preserving prior knowledge while enabling efficient adaptation, and a facial cross-attention adapter for anatomically accurate audio-driven lip synchronization. Extensive experiments confirm state-of-the-art performance in interaction naturalness and generalization under weak supervision. Finally, HunyuanVideo-HOMA demonstrates versatility in text-conditioned generation and interactive object manipulation, supported by a user-friendly demo interface. The project page is at https://anonymous.4open.science/w/homa-page-0FBE/.
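The abstract names three concrete mechanisms: dual input-space encoding of appearance and motion, fusion in a shared context space, and a parameter-space adapter initialized from pretrained MMDiT weights. The toy sketch below (not the authors' code; all names, shapes, and the linear-projection stand-in are assumptions) illustrates why that initialization "preserves prior knowledge": at the start of fine-tuning, the adapter branch is numerically identical to the pretrained branch.

```python
# Illustrative sketch only -- stands in for the MMDiT components described in
# the abstract. Dimensions, variable names, and the linear encoders are assumed.
import numpy as np

rng = np.random.default_rng(0)
D = 64  # shared context dimension (assumed)

def encode(x, W):
    """Project a modality's tokens into the shared context space."""
    return x @ W

# Pretrained projection (stand-in for pretrained MMDiT weights).
W_pretrained = rng.standard_normal((32, D)) / np.sqrt(32)

# Parameter-space adapter: initialized FROM the pretrained weights, so the
# adapted branch starts identical to the prior and is fine-tuned cheaply.
W_adapter = W_pretrained.copy()

appearance = rng.standard_normal((10, 32))  # e.g. reference-appearance tokens
motion     = rng.standard_normal((10, 32))  # e.g. sparse motion-guidance tokens

# Dual input space: each signal is encoded separately, then both token streams
# are concatenated into one shared context sequence for joint processing.
ctx = np.concatenate([encode(appearance, W_pretrained),
                      encode(motion, W_adapter)], axis=0)

assert ctx.shape == (20, D)
# At initialization the adapter output matches the pretrained output exactly,
# i.e. prior knowledge is preserved before any adaptation occurs.
assert np.allclose(encode(motion, W_adapter), encode(motion, W_pretrained))
```

After this initialization, only the adapter weights would be updated during fine-tuning, which is what makes the adaptation lightweight relative to retraining the full backbone.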