🤖 AI Summary
Existing gesture generation methods produce only repetitive, semantically empty beat gestures and fail to generate iconic or deictic gestures. This paper introduces the first vision-guided, language-driven gesture generation framework that synthesizes semantically consistent gestures in a zero-shot, unsupervised setting by jointly leveraging speech semantics and visual features, such as object shape and symmetry, extracted from input images. Our approach integrates visual analysis, cross-modal semantic alignment, and an inverse kinematics engine to ensure that generated gestures faithfully encode visually grounded information not explicitly specified in speech. A user study demonstrates that our generated gestures significantly reduce semantic ambiguity, improving listeners' accuracy in recognizing object attributes by 18.7% and thereby enhancing the multimodal communicative capability and comprehensibility of virtual agents.
📝 Abstract
Human communication combines speech with expressive nonverbal cues such as hand gestures, which serve manifold communicative functions. Yet current approaches to gesture generation are restricted to simple, repetitive beat gestures that accompany the rhythm of speech but do not contribute to communicating semantic meaning. This paper tackles a core challenge in co-speech gesture synthesis: generating iconic or deictic gestures that are semantically coherent with a verbal utterance. Such gestures cannot be derived from language input alone, which inherently lacks the visual meaning that gestures often carry autonomously. We therefore introduce a zero-shot system that generates gestures from a given language input and is additionally informed by imagistic input, without manual annotation or human intervention. Our method integrates an image analysis pipeline that extracts key object properties such as shape, symmetry, and alignment, together with a semantic matching module that links these visual details to the spoken text. An inverse kinematics engine then synthesizes iconic and deictic gestures and combines them with co-generated natural beat gestures for coherent multimodal communication. A comprehensive user study demonstrates the effectiveness of our approach: in scenarios where speech alone was ambiguous, gestures generated by our system significantly improved participants' ability to identify object properties, confirming their interpretability and communicative value. While challenges remain in representing complex shapes, our results highlight the importance of context-aware semantic gestures for creating expressive and collaborative virtual agents and avatars, marking a substantial step toward efficient, robust, embodied human-agent interaction. More information and example videos are available here: https://review-anon-io.github.io/ImaGGen.github.io/
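To make the described pipeline concrete, below is a minimal Python sketch of the stage ordering the abstract names: image analysis extracting shape, symmetry, and alignment, semantic matching against the utterance, and inverse-kinematics synthesis layered over beat gestures. The paper does not publish an API, so every name here (`ObjectProperties`, `analyze_image`, `match_semantics`, `synthesize_gestures`) is a hypothetical placeholder, and the keyword matching only stands in for the paper's semantic alignment module.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical types and functions; this illustrates only the flow described
# in the abstract (visual analysis -> semantic matching -> IK synthesis),
# not the authors' actual implementation.

@dataclass
class ObjectProperties:
    shape: str        # e.g. "round", "elongated"
    symmetric: bool   # whether the object has a symmetry axis
    alignment: str    # e.g. "vertical", "horizontal"

@dataclass
class GestureSegment:
    kind: str            # "iconic", "deictic", or "beat"
    onset_word: str      # word the gesture stroke is aligned to
    joint_targets: dict  # end-effector targets handed to the IK engine

def analyze_image(image_path: str) -> ObjectProperties:
    """Stand-in for the visual analysis stage (shape/symmetry/alignment)."""
    # A real system would run segmentation and geometric feature extraction.
    return ObjectProperties(shape="round", symmetric=True, alignment="vertical")

def match_semantics(utterance: str, props: ObjectProperties) -> Optional[str]:
    """Toy stand-in for the semantic matching module: pick a word in the
    utterance that the visual properties can plausibly attach to."""
    for word in utterance.lower().split():
        if word in {"vase", "bottle", "bowl", "it", "this", "that"}:
            return word
    return None

def synthesize_gestures(utterance: str, image_path: str) -> List[GestureSegment]:
    props = analyze_image(image_path)
    anchor = match_semantics(utterance, props)

    gestures: List[GestureSegment] = []
    if anchor is not None:
        # Deictic for demonstratives, iconic (shape-tracing) otherwise.
        kind = "deictic" if anchor in {"it", "this", "that"} else "iconic"
        seg = GestureSegment(
            kind=kind,
            onset_word=anchor,
            joint_targets={"right_wrist": ("trace_contour", props.shape)},
        )
        if props.symmetric:
            # Symmetric objects can be traced two-handed for clarity.
            seg.joint_targets["left_wrist"] = ("mirror", "right_wrist")
        gestures.append(seg)

    # Beat gestures cover the remaining words so speech rhythm stays intact.
    for word in utterance.split():
        if anchor is None or word.lower() != anchor:
            gestures.append(GestureSegment("beat", word, {"right_wrist": ("beat",)}))
    return gestures

if __name__ == "__main__":
    for g in synthesize_gestures("Please hand me that vase", "vase.jpg"):
        print(g.kind, g.onset_word, g.joint_targets)
```

The fixed keyword set and placeholder IK targets are assumptions made to keep the sketch self-contained and runnable; the key design point it mirrors is that semantic (iconic/deictic) gestures are anchored to specific words while beat gestures fill the rest of the utterance.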