ViBES: A Conversational Agent with Behaviorally-Intelligent 3D Virtual Body

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing co-speech gesture generation systems predominantly treat language-to-motion translation as a static mapping task, without joint decisions about *when* to move, *how* to move, and *how to dynamically adapt* gestures to conversational context. The result is brittle timing, weak social interactivity, and fragmented stacks in which speech, text, and motion are modeled in isolation. This paper introduces the first unified Speech-Language-Behavior (SLB) modeling framework, built on a mixture-of-modality-experts (MoME) architecture in which modality-partitioned transformer experts plan multimodal output synchronously via hard routing by modality and cross-expert attention. Streaming behavior-control hooks and multimodal token interleaving further support end-to-end co-generation of speech, facial expressions, and body gestures, as well as mixed-initiative interaction. Evaluations on multi-turn dialogues demonstrate significant improvements in gesture-language alignment and behavioral naturalness, consistently outperforming state-of-the-art co-speech gesture and text-driven baselines.
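
To make the routing concrete, here is a minimal PyTorch sketch of one MoME block under stated assumptions: three experts (speech, face, body), self-attention shared across modalities as the cross-expert channel, and a per-token modality ID driving hard routing. `MoMEBlock`, `modality_ids`, and the layer sizes are illustrative, not the paper's actual code.

```python
# Minimal sketch of one mixture-of-modality-experts (MoME) block.
# Assumption: three experts (speech=0, face=1, body=2) and a per-token
# modality ID for hard routing; names here are illustrative only.
import torch
import torch.nn as nn

class MoMEBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, n_experts: int = 3):
        super().__init__()
        # Attention is shared across modalities, so experts exchange
        # information over the interleaved stream (cross-expert attention).
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Feed-forward parameters are partitioned: one expert MLP per modality.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor, modality_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); modality_ids: (batch, seq) in {0, 1, 2}.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        h = self.norm2(x)
        # Hard routing: each token is processed only by its modality's expert.
        out = torch.zeros_like(h)
        for m, expert in enumerate(self.experts):
            mask = modality_ids == m            # (batch, seq) boolean mask
            if mask.any():
                out[mask] = expert(h[mask])     # tokens of modality m only
        return x + out

# Usage: a 16-token interleaved stream with random modality assignments.
block = MoMEBlock(d_model=256, n_heads=8)
x = torch.randn(2, 16, 256)
ids = torch.randint(0, 3, (2, 16))
y = block(x, ids)  # (2, 16, 256)
```

Because only the feed-forward experts are partitioned, each token updates a single expert's parameters, while the shared attention lets speech tokens condition face and body tokens in the same forward pass.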

📝 Abstract
Human communication is inherently multimodal and social: words, prosody, and body language jointly carry intent. Yet most prior systems model human behavior as a translation task (co-speech gesture or text-to-motion) that maps a fixed utterance to motion clips, without requiring agentic decision-making about when to move, what to do, or how to adapt across multi-turn dialogue. This leads to brittle timing, weak social grounding, and fragmented stacks where speech, text, and motion are trained or inferred in isolation. We introduce ViBES (Voice in Behavioral Expression and Synchrony), a conversational 3D agent that jointly plans language and movement and executes dialogue-conditioned body actions. Concretely, ViBES is a speech-language-behavior (SLB) model with a mixture-of-modality-experts (MoME) backbone: modality-partitioned transformer experts for speech, facial expression, and body motion. The model processes interleaved multimodal token streams with hard routing by modality (parameters are split per expert), while sharing information through cross-expert attention. By leveraging strong pretrained speech-language models, the agent supports mixed-initiative interaction: users can speak, type, or issue body-action directives mid-conversation, and the system exposes controllable behavior hooks for streaming responses. We further benchmark on multi-turn conversation with automatic metrics of dialogue-motion alignment and behavior quality, and observe consistent gains over strong co-speech and text-to-motion baselines. ViBES goes beyond "speech-conditioned motion generation" toward agentic virtual bodies where language, prosody, and movement are jointly generated, enabling controllable, socially competent 3D interaction. Code and data will be made available at: ai.stanford.edu/~juze/ViBES/
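
The "mixed-initiative interaction" and "controllable behavior hooks for streaming responses" described above amount to an event loop over interleaved user inputs. The sketch below shows one plausible shape for that loop; `UserEvent`, `agent.ingest`, `agent.step`, and `on_behavior` are hypothetical names assumed for illustration, not the paper's published API.

```python
# Hypothetical sketch of a mixed-initiative loop with streaming behavior
# hooks. The Agent interface (ingest/step) is an assumption, not the
# paper's actual API.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class UserEvent:
    kind: str      # "speech", "text", or "body_directive"
    payload: str   # transcribed audio, typed text, or e.g. "wave at the user"

@dataclass
class OutToken:
    modality: str  # "speech", "face", or "body"
    value: int     # discrete codebook index

def run_dialogue(agent,
                 events: Iterable[UserEvent],
                 on_behavior: Callable[[OutToken], None]) -> None:
    """Consume user events mid-conversation and stream the agent's response."""
    for event in events:
        # Any modality can take the initiative: speech, typed text, or an
        # explicit body-action directive enters the same interleaved stream.
        agent.ingest(event.kind, event.payload)
        # step() yields jointly generated tokens; the hook lets a renderer
        # or motion controller act on face/body tokens as they arrive,
        # without waiting for the full utterance.
        for token in agent.step():
            if token.modality in ("face", "body"):
                on_behavior(token)
```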
Problem

Research questions and friction points this paper is trying to address.

Develops a conversational 3D agent that jointly plans language and movement
Addresses fragmented modeling of speech, text, and motion in prior systems
Enables controllable, socially competent multimodal interaction in dialogue
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly plans language and movement for dialogue
Uses modality-partitioned transformer experts over interleaved speech, face, and body token streams (see the interleaving sketch after this list)
Leverages pretrained speech-language models for mixed-initiative interaction
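
As a concrete illustration of the token interleaving referenced above, the sketch below merges per-modality token streams into one time-ordered sequence, which is exactly the input form the hard-routed experts consume. The `Token` dataclass, the frame rates, and the `interleave_streams` helper are assumptions for illustration; the paper's tokenization details may differ.

```python
# Assumed sketch of multimodal token interleaving: each modality is already
# discretized into codebook indices at its own frame rate.
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    t: float        # timestamp in seconds
    modality: int   # 0 = speech, 1 = face, 2 = body
    value: int      # discrete codebook index

def interleave_streams(*streams: list[Token]) -> list[Token]:
    """Merge per-modality token streams into one time-ordered sequence.

    Sorting by (timestamp, modality) keeps co-occurring speech, face, and
    body tokens adjacent, so a single transformer attends across them while
    hard routing still dispatches each token to its own expert.
    """
    merged = [tok for stream in streams for tok in stream]
    return sorted(merged, key=lambda tok: (tok.t, tok.modality))

# Example: 12.5 Hz speech tokens interleaved with 30 Hz face/body tokens.
speech = [Token(i / 12.5, 0, 100 + i) for i in range(3)]
face   = [Token(i / 30.0, 1, 200 + i) for i in range(5)]
body   = [Token(i / 30.0, 2, 300 + i) for i in range(5)]
stream = interleave_streams(speech, face, body)
modality_ids = [tok.modality for tok in stream]  # drives hard routing
```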