🤖 AI Summary
Modeling the dynamic multimodal coordination of speech, gestures, and facial expressions in face-to-face social interaction remains challenging for socially intelligent AI.
Method: We introduce a large-scale dyadic audio-visual interaction dataset (4,000+ hours from 4,000+ participants) and propose a cross-modal sequential model that integrates speech recognition, visual behavior encoding, LLM-driven speech generation, and 2D/3D rendering to produce context-aware, coordinated actions. A cross-modal alignment network further improves dyadic action prediction accuracy.
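As a rough illustration of the cross-modal idea, the hypothetical PyTorch sketch below lets an agent's speech features attend to the interlocutor's visual-behavior features before per-frame gesture and facial-expression parameters are decoded. The module names, feature choices, and dimensions are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch, NOT the paper's architecture: cross-modal fusion of an
# agent's speech features with the interlocutor's visual-behavior features to
# predict per-frame motion and facial-expression parameters for the agent.
import torch
import torch.nn as nn

class DyadicMotionSketch(nn.Module):
    def __init__(self, speech_dim=80, partner_dim=128, motion_dim=156, d_model=256):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, d_model)    # e.g. mel or SSL speech features
        self.partner_proj = nn.Linear(partner_dim, d_model)  # interlocutor pose/face codes
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.speech_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Cross-attention: the agent's speech timeline attends to the partner's behavior.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, motion_dim)           # gesture + face parameters

    def forward(self, speech_feats, partner_feats):
        s = self.speech_enc(self.speech_proj(speech_feats))   # (B, T, d_model)
        p = self.partner_proj(partner_feats)                  # (B, T, d_model)
        fused, _ = self.cross_attn(query=s, key=p, value=p)   # align to partner cues
        return self.head(fused + s)                           # (B, T, motion_dim)

if __name__ == "__main__":
    model = DyadicMotionSketch()
    speech = torch.randn(2, 120, 80)     # 2 clips, 120 frames of speech features
    partner = torch.randn(2, 120, 128)   # interlocutor behavior over the same frames
    print(model(speech, partner).shape)  # torch.Size([2, 120, 156])
```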
Contribution/Results: Our framework supports fine-grained, controllable generation of gestures and facial expressions conditioned on emotional state, expressivity level, and semantic intent. Experiments show significant improvements in the motion coherence and affective alignment of virtual agents, and user studies report substantial gains in perceived naturalness and interaction quality, supporting the approach for embodied social AI.
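For the controllable variants, a common conditioning pattern is to map the control signals into a single vector that biases the motion decoder. The sketch below is a minimal, hypothetical example of that pattern; the emotion vocabulary, expressivity scalar, and intent-embedding dimension are placeholders, not the paper's design.

```python
# Hypothetical conditioning sketch: emotion label, expressivity level, and a
# semantic-intent embedding are combined into one control vector that could be
# added (broadcast over time) to the fused features of a motion model.
import torch
import torch.nn as nn

class ControlEmbedding(nn.Module):
    def __init__(self, num_emotions=7, intent_dim=384, d_model=256):
        super().__init__()
        self.emotion = nn.Embedding(num_emotions, d_model)   # categorical emotional state
        self.intensity = nn.Linear(1, d_model)                # scalar expressivity in [0, 1]
        self.intent = nn.Linear(intent_dim, d_model)          # e.g. a sentence embedding

    def forward(self, emotion_id, intensity, intent_vec):
        return (self.emotion(emotion_id)
                + self.intensity(intensity.unsqueeze(-1))
                + self.intent(intent_vec))                    # (B, d_model) control vector

if __name__ == "__main__":
    ctrl = ControlEmbedding()
    c = ctrl(torch.tensor([3, 0]),       # e.g. "happy" vs. "neutral"
             torch.tensor([0.9, 0.2]),   # high vs. low expressivity
             torch.randn(2, 384))        # semantic-intent embeddings
    print(c.shape)                       # torch.Size([2, 256])
```

Additive conditioning keeps the three controls independent and easy to ablate; FiLM-style scaling or classifier-free guidance are common alternatives.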
📝 Abstract
Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals. To develop socially intelligent AI technologies, it is crucial to build models that can both comprehend and generate dyadic behavioral dynamics. To this end, we introduce the Seamless Interaction Dataset, a large-scale collection of over 4,000 hours of face-to-face interaction footage from over 4,000 participants in diverse contexts. This dataset enables the development of AI technologies that understand dyadic embodied dynamics, unlocking breakthroughs in virtual agents, telepresence experiences, and multimodal content analysis tools. We also develop a suite of models that use the dataset to generate dyadic motion gestures and facial expressions aligned with human speech. These models can take as input both the speech and the visual behavior of their interlocutors. We present a variant driven by speech from an LLM, together with integrations with 2D and 3D rendering methods, bringing us closer to interactive virtual agents. Additionally, we describe controllable variants of our motion models that can adapt emotional responses and expressivity levels, as well as generate more semantically relevant gestures. Finally, we discuss methods for assessing the quality of these dyadic motion models, which demonstrate the potential for more intuitive and responsive human-AI interactions.
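The abstract does not spell out the assessment protocol. As an illustration only, one widely used automatic measure for motion generation is a Fréchet-style distance between features of real and generated motion (analogous to FID/FGD); the feature dimensionality and data below are synthetic placeholders, not results from the paper.

```python
# Illustrative evaluation sketch (not necessarily the paper's protocol):
# Frechet distance between Gaussians fit to real vs. generated motion features.
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """real_feats, gen_feats: (N, D) arrays of per-clip motion embeddings."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)          # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                     # drop numerical imaginary residue
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(size=(500, 64))              # placeholder "real" motion embeddings
    generated = rng.normal(loc=0.3, size=(500, 64))
    print(f"Frechet distance: {frechet_distance(real, generated):.3f}")
```

Lower values indicate generated motion whose feature statistics are closer to real data; automatic metrics of this kind are typically complemented by human studies of perceived naturalness and interaction quality, as mentioned in the summary above.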