Grounded Gesture Generation: Language, Motion, and Space

πŸ“… 2025-07-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing embodied gesture generation methods typically decouple motion modeling from the spatial environment, limiting their ability to drive contextualized, communicative agents in realistic scenarios. To address this, we propose the first synchronous multimodal framework integrating 3D scene geometry, speech, and full-body human motion. We construct a standardized dataset in the HumanML3D format, unifying VR-captured conversational data (MM-Conv) with synthetically generated referential gestures, and integrate a physics simulator for both data generation and evaluation. Our pipeline yields 7.7 hours of high-fidelity, temporally aligned multimodal dataβ€”the first to tightly couple gesture generation with physical spatial constraints. This work establishes a scalable data foundation for context-aware embodied interaction, provides a reproducible experimental platform, and introduces an end-to-end generative paradigm tailored for virtual reality deployment.
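To make the data layout concrete, here is a minimal sketch of what one temporally aligned sample in such a dataset might look like. The class and field names are hypothetical illustrations, not the authors' actual API; the 263-dimensional, 20 fps motion representation follows the published HumanML3D feature specification.

```python
from dataclasses import dataclass

import numpy as np

MOTION_FPS = 20       # HumanML3D motions are resampled to 20 fps
MOTION_DIM = 263      # per-frame pose feature dimension in HumanML3D


@dataclass
class GroundedGestureSample:
    """Hypothetical container for one synchronized multimodal clip."""
    motion: np.ndarray        # (T, 263) HumanML3D-style pose features
    speech_audio: np.ndarray  # mono waveform aligned to the motion
    audio_sr: int             # audio sample rate, e.g. 16 kHz
    scene_path: str           # 3D scene geometry used for spatial grounding

    def motion_duration_s(self) -> float:
        return self.motion.shape[0] / MOTION_FPS

    def audio_duration_s(self) -> float:
        return len(self.speech_audio) / self.audio_sr

    def is_aligned(self, tol_s: float = 0.05) -> bool:
        # Temporal alignment check: motion and speech should cover
        # the same time span up to a small tolerance.
        return abs(self.motion_duration_s() - self.audio_duration_s()) <= tol_s


# Toy example: a 5-second clip with matching motion and audio lengths.
sample = GroundedGestureSample(
    motion=np.zeros((5 * MOTION_FPS, MOTION_DIM), dtype=np.float32),
    speech_audio=np.zeros(5 * 16_000, dtype=np.float32),
    audio_sr=16_000,
    scene_path="scenes/living_room.glb",  # placeholder path
)
print(sample.motion_duration_s(), sample.is_aligned())  # β†’ 5.0 True
```

A structure like this makes the "temporally aligned" claim checkable per clip, which is useful both when converting VR captures and when emitting synthetic data from a simulator.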

πŸ“ Abstract
Human motion generation has advanced rapidly in recent years, yet the critical problem of creating spatially grounded, context-aware gestures has been largely overlooked. Existing models typically specialize either in descriptive motion generation, such as locomotion and object interaction, or in isolated co-speech gesture synthesis aligned with utterance semantics. However, both lines of work often treat motion and environmental grounding separately, limiting advances toward embodied, communicative agents. To address this gap, our work introduces a multimodal dataset and framework for grounded gesture generation, combining two key resources: (1) a synthetic dataset of spatially grounded referential gestures, and (2) MM-Conv, a VR-based dataset capturing two-party dialogues. Together, they provide over 7.7 hours of synchronized motion, speech, and 3D scene information, standardized in the HumanML3D format. Our framework further connects to a physics-based simulator, enabling synthetic data generation and situated evaluation. By bridging gesture modeling and spatial grounding, our contribution establishes a foundation for advancing research in situated gesture generation and grounded multimodal interaction. Project page: https://groundedgestures.github.io/
Problem

Research questions and friction points this paper is trying to address.

Generating spatially grounded, context-aware human gestures
Bridging motion generation and environmental grounding in gestures
Creating embodied, communicative agents with multimodal interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal dataset for grounded gesture generation
Combines synthetic and VR-based dialogue datasets
Connects to physics simulator for evaluation