E-React: Towards Emotionally Controlled Synthesis of Human Reactions

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing motion generation methods neglect affective factors, resulting in suboptimal motion naturalness and limited interaction plausibility. To address this, we formulate a novel task—emotion-driven human response generation—and propose a semi-supervised emotional prior learning framework that models emotion consistency using short-sequence motion data. We further design an actor-reactor diffusion model that jointly encodes emotional semantics and spatial interaction constraints during generation. Our approach requires no large-scale emotion-annotated datasets, yet synthesizes diverse, high-fidelity, and contextually coherent response motions across multiple emotional conditions. Quantitative and qualitative evaluations demonstrate significant improvements over state-of-the-art reactive motion generation methods in both motion naturalness and interaction plausibility. This work establishes a scalable, emotion-aware generative paradigm for human–machine interaction.

📝 Abstract
Emotion serves as an essential component in daily human interactions. Existing human motion generation frameworks do not consider the impact of emotions, which reduces naturalness and limits their application in interactive tasks, such as human reaction synthesis. In this work, we introduce a novel task: generating diverse reaction motions in response to different emotional cues. However, learning emotion representation from limited motion data and incorporating it into a motion generation framework remains a challenging problem. To address these obstacles, we introduce a semi-supervised emotion prior in an actor-reactor diffusion model to facilitate emotion-driven reaction synthesis. Specifically, based on the observation that motion clips within a short sequence tend to share the same emotion, we first devise a semi-supervised learning framework to train an emotion prior. With this prior, we further train an actor-reactor diffusion model to generate reactions by considering both spatial interaction and emotional response. Finally, given a motion sequence of an actor, our approach can generate realistic reactions under various emotional conditions. Experimental results demonstrate that our model outperforms existing reaction generation methods. The code and data will be made publicly available at https://ereact.github.io/.
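The abstract's key supervision signal is that motion clips within a short sequence tend to share the same emotion. As a rough illustration of how such a signal can train an emotion embedding without per-clip labels, here is a minimal InfoNCE-style consistency loss in NumPy: clips from the same sequence are treated as positives, clips from other sequences as negatives. The embedding dimension, temperature, and loss form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def emotion_consistency_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style consistency loss: two clips drawn from the same short
    sequence (assumed to share one emotion) should embed closer together
    than clips drawn from other sequences. Illustrative sketch only."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # low when the positive wins

rng = np.random.default_rng(0)
base = rng.normal(size=8)                        # shared "emotion" of one sequence
anchor = base + 0.05 * rng.normal(size=8)        # clip 1 of that sequence
positive = base + 0.05 * rng.normal(size=8)      # clip 2 of the same sequence
negatives = [rng.normal(size=8) for _ in range(4)]  # clips from other sequences

loss_same = emotion_consistency_loss(anchor, positive, negatives)
loss_cross = emotion_consistency_loss(anchor, negatives[0],
                                      [positive] + negatives[1:])
```

Pairing clips by sequence membership rather than by emotion label is what makes the scheme semi-supervised: only sequence boundaries, not emotion annotations, are required.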
Problem

Research questions and friction points this paper is trying to address.

Generating diverse human reactions to emotional cues
Learning emotion representation from limited motion data
Incorporating emotional response into motion generation frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-supervised emotion prior learning
Actor-reactor diffusion model architecture
Emotion-driven spatial interaction synthesis
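The innovations above center on conditioning reaction generation jointly on the actor's motion and an emotion signal. As a rough, self-contained sketch of that conditioning pattern, here is a deterministic DDIM-style reverse-diffusion step in NumPy. The 6-DoF "pose" vector, the noise schedule, the placeholder conditioning features, and the oracle noise predictor are all illustrative assumptions standing in for the paper's trained actor-reactor network.

```python
import numpy as np

def ddim_reverse_step(x_t, t, actor_feat, emotion_emb, eps_model, alphas_cum):
    """One deterministic reverse-diffusion step for the reactor's pose.
    The noise predictor sees the noisy reactor pose plus a conditioning
    vector built from actor motion features and an emotion embedding."""
    cond = np.concatenate([actor_feat, emotion_emb])
    eps = eps_model(x_t, t, cond)                              # predicted noise
    a_t, a_prev = alphas_cum[t], alphas_cum[t - 1]
    x0_hat = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)   # clean-pose estimate
    return np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps

# Toy setup: a 6-DoF "reaction pose" and a 10-step noise schedule.
rng = np.random.default_rng(1)
alphas_cum = np.linspace(0.99, 0.05, 10)   # cumulative alphas, index 0 = least noise
x0 = rng.normal(size=6)                    # ground-truth clean reaction pose
eps_true = rng.normal(size=6)              # the noise actually added
actor_feat = rng.normal(size=4)            # placeholder actor motion features
emotion_emb = rng.normal(size=3)           # placeholder emotion embedding

# Oracle predictor returning the true noise, standing in for the network.
eps_model = lambda x, t, cond: eps_true

# Diffuse to the noisiest step, then walk the reverse chain back.
t_max = len(alphas_cum) - 1
x_t = np.sqrt(alphas_cum[t_max]) * x0 + np.sqrt(1.0 - alphas_cum[t_max]) * eps_true
for t in range(t_max, 0, -1):
    x_t = ddim_reverse_step(x_t, t, actor_feat, emotion_emb, eps_model, alphas_cum)

# With an exact noise oracle, the chain recovers the clean pose.
x0_rec = (x_t - np.sqrt(1.0 - alphas_cum[0]) * eps_true) / np.sqrt(alphas_cum[0])
```

Concatenating the actor features with the emotion embedding is only one plausible conditioning choice; the paper's model may inject these signals differently (e.g., via cross-attention).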