ReactDiff: Latent Diffusion for Facial Reaction Generation

📅 2025-05-20
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses audio-visual-driven listener facial reaction generation, aiming to jointly model reaction appropriateness, realism, and diversity. The authors propose the first end-to-end framework integrating a multimodal Transformer with conditional diffusion in a latent space: intra- and inter-class cross-modal attention enables fine-grained audio-visual interaction modeling, while conditional diffusion in a VAE latent space explicitly captures the one-to-many mapping from input stimuli to plausible reactions, enhancing both diversity and realism. On standard benchmarks, the method achieves a state-of-the-art facial reaction correlation of 0.26 and a diversity score of 0.094 while attaining top-tier realism. The code is publicly available.

📝 Abstract
Given the audio-visual clip of the speaker, facial reaction generation aims to predict the listener's facial reactions. The challenge lies in capturing the relevance between video and audio while balancing appropriateness, realism, and diversity. While prior works have mostly focused on uni-modal inputs or simplified reaction mappings, recent approaches such as PerFRDiff have explored multi-modal inputs and the one-to-many nature of appropriate reaction mappings. In this work, we propose the Facial Reaction Diffusion (ReactDiff) framework that uniquely integrates a Multi-Modality Transformer with conditional diffusion in the latent space for enhanced reaction generation. Unlike existing methods, ReactDiff leverages intra- and inter-class attention for fine-grained multi-modal interaction, while the latent diffusion process between the encoder and decoder enables diverse yet contextually appropriate outputs. Experimental results demonstrate that ReactDiff significantly outperforms existing approaches, achieving a facial reaction correlation of 0.26 and diversity score of 0.094 while maintaining competitive realism. The code is open-sourced at https://github.com/Hunan-Tiger/ReactDiff.
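For intuition, the sketch below shows how conditional diffusion in a VAE latent space yields the one-to-many mapping the abstract describes. It is a minimal PyTorch illustration, not the authors' implementation: the `LatentDenoiser` module, all dimensions, and the textbook DDPM schedule are assumptions; only the overall pattern (denoise a latent conditioned on speaker features, then decode) follows the paper's description.

```python
import torch
import torch.nn as nn

# Hypothetical denoiser: predicts the noise added to a VAE latent,
# conditioned on a speaker audio-visual embedding (names/dims assumed).
class LatentDenoiser(nn.Module):
    def __init__(self, latent_dim=128, cond_dim=256, hidden=512):
        super().__init__()
        self.time_embed = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim))

    def forward(self, z_t, t, cond):
        # z_t: (B, latent_dim) noisy latent; t: (B, 1) normalized timestep;
        # cond: (B, cond_dim) speaker audio-visual context
        return self.net(torch.cat([z_t, cond, self.time_embed(t)], dim=-1))

# Standard DDPM ancestral sampling in the latent space (textbook schedule).
@torch.no_grad()
def sample_latent(model, cond, steps=50, latent_dim=128):
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(cond.size(0), latent_dim)  # start from pure noise
    for i in reversed(range(steps)):
        t = torch.full((cond.size(0), 1), i / steps)
        eps = model(z, t, cond)
        # DDPM posterior mean; different noise seeds give different
        # (one-to-many) reaction latents for the same speaker input.
        z = (z - betas[i] / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
        if i > 0:
            z = z + torch.sqrt(betas[i]) * torch.randn_like(z)
    return z  # a VAE decoder would map this to a facial reaction sequence

cond = torch.randn(4, 256)       # 4 dummy speaker embeddings
z0 = sample_latent(LatentDenoiser(), cond)
print(z0.shape)                  # torch.Size([4, 128])
```

Sampling twice with the same `cond` but different random seeds produces distinct latents, which is how the diffusion stage contributes diversity without sacrificing conditioning on the speaker.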
Problem

Research questions and friction points this paper is trying to address.

Generate the listener's facial reactions from the speaker's audio-visual clip
Balance relevance, appropriateness, realism, and diversity in generated reactions
Enable fine-grained multi-modal interaction with diverse yet contextually appropriate outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Modality Transformer for fine-grained audio-visual interaction
Conditional diffusion in the latent space for diverse yet appropriate outputs
Intra- and inter-class attention mechanisms (see the sketch below)
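The sketch below gives one plausible PyTorch reading of these attention mechanisms; module names, dimensions, and wiring are assumptions rather than the paper's code. Intra-class attention lets each modality attend within itself, and inter-class attention lets audio query video and vice versa, providing the fine-grained cross-modal interaction the summary describes.

```python
import torch
import torch.nn as nn

class IntraInterAttention(nn.Module):
    """Illustrative intra-/inter-class attention block (assumed design)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.intra_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_av = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_va = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, video):
        # audio, video: (B, T, dim) per-modality speaker features
        a, _ = self.intra_a(audio, audio, audio)  # intra-class: audio self-attention
        v, _ = self.intra_v(video, video, video)  # intra-class: video self-attention
        a2, _ = self.inter_av(a, v, v)            # inter-class: audio queries video
        v2, _ = self.inter_va(v, a, a)            # inter-class: video queries audio
        return a2, v2

block = IntraInterAttention()
audio = torch.randn(2, 50, 256)  # dummy audio features
video = torch.randn(2, 50, 256)  # dummy video features
a_out, v_out = block(audio, video)
print(a_out.shape, v_out.shape)  # torch.Size([2, 50, 256]) each
```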
Jiaming Li
School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China
Sheng Wang
School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China
Xin Wang
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
Yitao Zhu
Hong Kong Polytechnic University
Honglin Xiong
ShanghaiTech University
Zixu Zhuang
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
Qian Wang
School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China