🤖 AI Summary
Existing approaches struggle to model nonverbal cues and complex social dynamics inherent in real-world multi-person group interactions. This work proposes PolySLGen, the first framework capable of online generation of multimodal responses—including speech, body motion, and speaking status—for a target participant based on the historical dialogue and action sequences of all group members. Its core innovation lies in a pose fusion module and a social cue encoder that jointly capture group interaction dynamics, enabling coordinated generation of speech, motion, and speaking status through multimodal sequence modeling. Experiments demonstrate that PolySLGen significantly outperforms existing baselines in motion quality, speech-motion alignment, speaking status prediction, and human-perceived realism, producing outputs with strong contextual appropriateness and temporal coherence.
📝 Abstract
Human-like multimodal reaction generation is essential for natural group interactions between humans and embodied AI. However, existing approaches are limited to single-modality or speaking-only responses in dyadic interactions, making them unsuitable for realistic social scenarios. Many also overlook nonverbal cues and the complex dynamics of polyadic interactions, both of which are critical for engagement and conversational coherence. In this work, we present PolySLGen, an online framework for Polyadic multimodal Speaking and Listening reaction Generation. Given past conversation and motion from all participants, PolySLGen generates a future speaking or listening reaction for a target participant, including speech, body motion, and a speaking state score. To model group interactions effectively, we propose a pose fusion module and a social cue encoder that jointly aggregate motion and social signals from the group. Extensive quantitative and qualitative evaluations show that PolySLGen produces contextually appropriate and temporally coherent multimodal reactions, outperforming several adapted and state-of-the-art baselines in motion quality, motion-speech alignment, speaking state prediction, and human-perceived realism.
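To make the abstract's input/output contract concrete, here is a minimal sketch of the online generation step as described: all participants' histories go in, and the target participant's next speech, motion, and speaking state score come out. All names, shapes, and the placeholder model below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of PolySLGen's online interface (illustrative only).
# Feature dimensions, class names, and the dummy generation step are assumptions.
from dataclasses import dataclass
import numpy as np


@dataclass
class ParticipantHistory:
    speech: np.ndarray          # (T, D_audio)  past speech features (e.g. acoustic frames)
    motion: np.ndarray          # (T, D_pose)   past body-pose parameters
    speaking_state: np.ndarray  # (T,)          past speaking-state scores in [0, 1]


@dataclass
class Reaction:
    speech: np.ndarray          # (T_future, D_audio)  generated speech features
    motion: np.ndarray          # (T_future, D_pose)   generated body motion
    speaking_state: np.ndarray  # (T_future,)          predicted speaking-state score


def generate_reaction(group: list[ParticipantHistory],
                      target_idx: int,
                      t_future: int = 30) -> Reaction:
    """Placeholder for the online step: a real model would fuse poses and social
    cues from the whole group, then decode the target participant's reaction."""
    d_audio = group[target_idx].speech.shape[1]
    d_pose = group[target_idx].motion.shape[1]
    # Stand-in output: zeros with the expected shapes; the actual framework
    # conditions on the fused group context to produce these jointly.
    return Reaction(
        speech=np.zeros((t_future, d_audio)),
        motion=np.zeros((t_future, d_pose)),
        speaking_state=np.zeros(t_future),
    )
```

In this sketch, the same `Reaction` structure covers both speaking and listening turns: a listening reaction would simply carry near-zero speech content and a low speaking-state score, which is consistent with the joint generation of speech, motion, and speaking status described above.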