Bridging Speech, Emotion, and Motion: A VLM-based Multimodal Edge-deployable Framework for Humanoid Robots

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work tackles the challenge of producing emotionally coherent expression in humanoid robots, where speech, facial expressions, and body gestures must remain synchronized in real-world scenarios, a capability further hindered by the lack of deployable offline edge solutions. To this end, we propose SeM², a framework integrating vision-language models, multimodal perception, and chain-of-thought reasoning, featuring a novel Semantic-Sequence Aligning Mechanism (SSAM) that ensures temporal synchronization and emotional consistency across modalities. Building on this, we develop a lightweight edge variant, SeM²ₑ, via knowledge distillation, enabling efficient on-device deployment with only a 5% performance degradation. Experimental results demonstrate that the approach significantly outperforms unimodal baselines in naturalness, emotional clarity, and cross-modal consistency, advancing the social expressiveness of embodied agents in cloud-free environments.
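
The page does not include code, so the sketch below only illustrates, under assumed interfaces, what an SSAM-style temporal alignment step could look like: planned non-verbal cues are anchored to word indices in the utterance and scheduled against per-word timestamps from a TTS engine. The `Cue` class, `align_cues` function, and the timestamp format are all hypothetical, not the authors' published mechanism.

```python
# Hypothetical sketch of semantic-sequence alignment (SSAM internals are
# not published on this page; all names below are illustrative).
from dataclasses import dataclass

@dataclass
class Cue:
    """A planned non-verbal cue tied to a word in the spoken response."""
    word_index: int   # which word in the utterance triggers the cue
    kind: str         # e.g. "gesture:nod" or "face:smile"

def align_cues(word_timestamps: list[tuple[float, float]],
               cues: list[Cue]) -> list[tuple[float, str]]:
    """Map each cue onto the start time of its anchor word so motion
    and facial commands fire in sync with the speech audio."""
    schedule = []
    for cue in cues:
        start, _end = word_timestamps[cue.word_index]
        schedule.append((start, cue.kind))
    return sorted(schedule)

# Example: "I am | so | glad | to | see | you", with a smile on "glad"
# and a nod on "you" (timestamps would come from the TTS engine).
timestamps = [(0.0, 0.3), (0.3, 0.5), (0.5, 0.9),
              (0.9, 1.0), (1.0, 1.3), (1.3, 1.6)]
cues = [Cue(word_index=2, kind="face:smile"),
        Cue(word_index=5, kind="gesture:nod")]
print(align_cues(timestamps, cues))
# [(0.5, 'face:smile'), (1.3, 'gesture:nod')]
```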

📝 Abstract
Effective human-robot interaction requires emotionally rich multimodal expression, yet most humanoid robots lack coordinated speech, facial expressions, and gestures. Meanwhile, real-world deployment demands on-device solutions that can operate autonomously without continuous cloud connectivity. To bridge Speech, Emotion, and Motion, we present SeM², a Vision Language Model-based framework that orchestrates emotionally coherent multimodal interactions through three key components: a multimodal perception module that captures user contextual cues, a Chain-of-Thought reasoning stage for response planning, and a novel Semantic-Sequence Aligning Mechanism (SSAM) that ensures precise temporal coordination between verbal content and physical expressions. We implement both a cloud-based version and an edge-deployed version (SeM²ₑ), the latter knowledge-distilled to run efficiently on edge hardware while retaining 95% of the relative performance. Comprehensive evaluations demonstrate that our approach significantly outperforms unimodal baselines in naturalness, emotional clarity, and modal coherence, advancing socially expressive humanoid robotics for diverse real-world environments.
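
The distillation recipe behind SeM²ₑ is not detailed on this page; as a point of reference, a minimal soft-label distillation objective in the style of Hinton et al. (2015) is sketched below. The function name and the temperature-scaled KL-divergence formulation are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of knowledge distillation with soft labels; the actual
# SeM²ₑ distillation setup is an assumption here, not the paper's recipe.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    token distributions, scaled by T^2 as in Hinton et al. (2015)."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 positions over a 32-token vocabulary.
student = torch.randn(4, 32, requires_grad=True)
teacher = torch.randn(4, 32)
loss = distillation_loss(student, teacher)
loss.backward()
```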
Problem

Research questions and friction points this paper is trying to address.

human-robot interaction
multimodal expression
emotion coordination
edge deployment
speech-gesture synchronization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Language Model
Multimodal Interaction
Edge Deployment
Semantic-Sequence Aligning Mechanism
Chain-of-Thought Reasoning
Songhua Yang
School of Computer Science, Wuhan University
Xuetao Li
School of Computer Science, Wuhan University
Xuanye Fei
School of Computer Science, Wuhan University
Mengde Li
School of Computer Science, Wuhan University
Miao Li
Professor, Wuhan University
Robotics · Grasping · Dexterous Manipulation · Learning from Demonstration