🤖 AI Summary
Existing co-speech gesture generation methods assume conditional independence among multimodal inputs (speech, text, and motion), which limits their ability to ensure both gesture diversity and temporal-semantic coherence. To address this, we propose a heterogeneous topology-driven tri-modal entanglement mechanism that explicitly models the asymmetric dependencies among gesture dynamics, audio prosody, and textual semantics. We further design a reprogrammable cross-modal semantic alignment module that integrates spatiotemporal graph neural networks with topological embeddings to enable joint entanglement learning. Evaluated on multiple benchmarks, our approach achieves state-of-the-art performance, significantly improving gesture naturalness, temporal coordination with speech, and semantic fidelity. It supports high-fidelity, diverse, and real-time gesture synthesis while preserving fine-grained alignment across modalities.
📝 Abstract
Co-speech gestures are crucial non-verbal cues that enhance speech clarity and expressiveness in human communication, and they have attracted increasing attention in multimodal research. While existing methods have made strides in gesture accuracy, challenges remain in generating diverse and coherent gestures: most approaches assume independence among multimodal inputs and lack explicit modeling of their interactions. In this work, we propose HOP, a novel multimodal learning method for co-speech gesture generation that captures the heterogeneous entanglement between gesture motion, audio rhythm, and text semantics, enabling the generation of coordinated gestures. By leveraging spatiotemporal graph modeling, we align audio with motion. Moreover, to enhance modality coherence, we build an audio-text semantic representation based on a reprogramming module, which benefits cross-modality adaptation. Our approach enables the tri-modal system to learn one another's features and represent them in the form of topological entanglement. Extensive experiments demonstrate that HOP achieves state-of-the-art performance, offering more natural and expressive co-speech gesture generation. More information, code, and demos are available at: https://star-uu-wang.github.io/HOP/
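To give a rough intuition for the reprogramming idea described above, the sketch below re-expresses audio-frame features as attention-weighted combinations of text token embeddings, producing an audio-text semantic representation. This is a minimal illustration with hypothetical names and dimensions, not the actual HOP module, which is more elaborate:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reprogram(audio_feats, text_embeds):
    """Hypothetical reprogramming step: each audio frame attends over the
    text token embeddings, and is re-expressed as a convex combination of
    them, mapping audio into the text semantic space.

    audio_feats: (T_audio, d) array of per-frame audio features
    text_embeds: (N_text, d) array of text token embeddings
    returns:     (T_audio, d) audio-text semantic representation
    """
    d = text_embeds.shape[-1]
    scores = audio_feats @ text_embeds.T / np.sqrt(d)  # (T_audio, N_text)
    attn = softmax(scores, axis=-1)                    # rows sum to 1
    return attn @ text_embeds                          # (T_audio, d)
```

Under this view, "cross-modality adaptation" amounts to representing one modality in the embedding space of another before the joint (tri-modal) entanglement is learned.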