HOP: Heterogeneous Topology-based Multimodal Entanglement for Co-Speech Gesture Generation

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing co-speech gesture generation methods assume conditional independence among multimodal inputs (speech, text, and motion), limiting their ability to simultaneously ensure gesture diversity and temporal-semantic coherence. To address this, we propose a heterogeneous topology-driven tri-modal entanglement mechanism that explicitly models asymmetric dependencies among gesture dynamics, audio prosody, and textual semantics. Furthermore, we design a reprogrammable cross-modal semantic alignment module, integrating spatiotemporal graph neural networks with topological embeddings to enable joint entanglement learning. Evaluated on multiple benchmarks, our approach achieves state-of-the-art performance, significantly improving gesture naturalness, temporal coordination with speech, and semantic fidelity. It supports high-fidelity, diverse, and real-time gesture synthesis while preserving fine-grained alignment across modalities.

📝 Abstract
Co-speech gestures are crucial non-verbal cues that enhance the clarity and expressiveness of speech in human communication and have attracted increasing attention in multimodal research. While existing methods have made strides in gesture accuracy, challenges remain in generating diverse and coherent gestures, as most approaches assume independence among multimodal inputs and lack explicit modeling of their interactions. In this work, we propose HOP, a novel multimodal learning method for co-speech gesture generation that captures the heterogeneous entanglement between gesture motion, audio rhythm, and text semantics, enabling the generation of coordinated gestures. By leveraging spatiotemporal graph modeling, we align audio and motion. Moreover, to enhance coherence across modalities, we build an audio-text semantic representation based on a reprogramming module, which benefits cross-modality adaptation. Our approach enables the three modalities to learn from one another's features and represent them in the form of topological entanglement. Extensive experiments demonstrate that HOP achieves state-of-the-art performance, offering more natural and expressive co-speech gesture generation. More information, code, and demos are available at: https://star-uu-wang.github.io/HOP/
Problem

Research questions and friction points this paper is trying to address.

Generating diverse and coherent co-speech gestures
Modeling interactions among gesture, audio, and text
Enhancing cross-modality coherence and alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous multimodal entanglement for gesture generation
Spatiotemporal graph modeling for audio-action alignment
Reprogramming module for audio-text semantic representation
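To make the graph-based audio-motion alignment idea concrete, here is a minimal, hypothetical sketch of one spatiotemporal fusion step: joint features are propagated over the skeleton adjacency (spatial), smoothed across frames (temporal), and then combined with per-frame audio features so every joint sees speech context. All function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def entangle_step(audio_feats, joint_feats, adjacency):
    """Illustrative sketch (not HOP's actual code) of one
    spatiotemporal fusion step for audio-motion alignment.

    audio_feats : (T, D) per-frame audio features
    joint_feats : (T, J, D) per-joint motion features
    adjacency   : (J, J) binary skeleton adjacency matrix
    """
    T, J, D = joint_feats.shape
    # Spatial message passing: average each joint with its skeleton
    # neighbours (self-loops added, degree-normalised).
    A = adjacency + np.eye(J)
    deg = A.sum(axis=1, keepdims=True)          # (J, 1)
    spatial = (A @ joint_feats) / deg           # (T, J, D)
    # Temporal smoothing: mix each frame with its two neighbours,
    # edge-padding so the sequence length is preserved.
    padded = np.pad(spatial, ((1, 1), (0, 0), (0, 0)), mode="edge")
    temporal = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    # Inject the frame's audio context into every joint.
    return temporal + audio_feats[:, None, :]   # (T, J, D)
```

In the paper's full model this role is played by learned spatiotemporal graph networks and a reprogramming module for the audio-text side; the sketch above only shows the shape of the computation, not its parameters.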
Hongye Cheng
College of Mechanical and Electronic Engineering, Northwest A&F University
Tianyu Wang
Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University
Guangsi Shi
Midea
Physics-guided AI · AI for Science & Engineering · Embodied AI
Zexing Zhao
College of Information Engineering, Northwest A&F University
Yanwei Fu
Fudan University
Computer vision · Machine learning · Multimedia