AI Summary
Existing embodied gesture generation methods typically decouple motion modeling from spatial environments, limiting their usefulness for contextualized, communicative agents in realistic scenarios. To address this, we propose the first synchronous multimodal framework integrating 3D scene geometry, speech, and full-body human motion. We construct a standardized dataset in HumanML3D format, unifying VR-captured conversational data (MM-Conv) with synthetically generated referential gestures, and integrate a physics simulator for both data generation and evaluation. Our pipeline yields 7.7 hours of high-fidelity, temporally aligned multimodal data, the first to tightly couple gesture generation with physical spatial constraints. This work establishes a scalable data foundation for context-aware embodied interaction, provides a reproducible experimental platform, and introduces an end-to-end generative paradigm tailored for virtual reality deployment.
Abstract
Human motion generation has advanced rapidly in recent years, yet the critical problem of creating spatially grounded, context-aware gestures has been largely overlooked. Existing models typically specialize either in descriptive motion generation, such as locomotion and object interaction, or in isolated co-speech gesture synthesis aligned with utterance semantics. However, both lines of work often treat motion and environmental grounding separately, limiting advances toward embodied, communicative agents. To address this gap, our work introduces a multimodal dataset and framework for grounded gesture generation, combining two key resources: (1) a synthetic dataset of spatially grounded referential gestures, and (2) MM-Conv, a VR-based dataset capturing two-party dialogues. Together, they provide over 7.7 hours of synchronized motion, speech, and 3D scene information, standardized in the HumanML3D format. Our framework further connects to a physics-based simulator, enabling synthetic data generation and situated evaluation. By bridging gesture modeling and spatial grounding, our contribution establishes a foundation for advancing research in situated gesture generation and grounded multimodal interaction.
Project page: https://groundedgestures.github.io/
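To make concrete what "standardized in the HumanML3D format" implies for downstream use, the sketch below shows how a single motion clip from such a dataset could be loaded and normalized. It assumes the usual HumanML3D conventions (per-clip .npy motion files with 263-dimensional per-frame features at 20 fps, plus shared Mean/Std statistics) and uses hypothetical file names such as `sample_000001.npy`; the actual layout of this dataset may differ.

```python
import numpy as np

# Minimal loading sketch for a HumanML3D-style motion clip.
# Assumptions (not confirmed by the project page): each clip is a .npy array
# of shape (num_frames, 263) sampled at 20 fps, and dataset-wide normalization
# statistics are stored in Mean.npy / Std.npy.
FPS = 20
FEATURE_DIM = 263  # standard HumanML3D redundant pose representation

def load_clip(motion_path: str, mean_path: str, std_path: str) -> np.ndarray:
    """Load one motion clip and z-normalize it with dataset statistics."""
    motion = np.load(motion_path)              # (T, 263) per-frame features
    assert motion.ndim == 2 and motion.shape[1] == FEATURE_DIM
    mean = np.load(mean_path)                  # (263,)
    std = np.load(std_path)                    # (263,)
    return (motion - mean) / (std + 1e-8)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    clip = load_clip("sample_000001.npy", "Mean.npy", "Std.npy")
    print(f"Loaded {clip.shape[0]} frames ({clip.shape[0] / FPS:.1f} s) "
          f"of {clip.shape[1]}-D features")
```

Because speech and 3D scene information are temporally aligned with the motion, a frame index in the clip can be mapped back to a timestamp (frame / 20 fps) to index the corresponding audio and scene streams.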