Advancing Face-to-Face Emotion Communication: A Multimodal Dataset (AFFEC)

📅 2025-04-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing affective computing datasets inadequately capture the subtle nonverbal cues, inter-individual variability, and dynamic nature of real-world face-to-face interactions, largely because they rely on single-modal data and highly controlled laboratory settings. To address this, we introduce AFFEC, a multimodal affect dataset grounded in ecologically valid simulated dialogues, comprising 73 participants, 84 dyadic conversations, and over 5,000 trials. We synchronously record EEG, eye-tracking, galvanic skin response (GSR), facial video, and Big Five personality traits, and explicitly disentangle *felt affect* (the participant's self-reported internal state) from *perceived affect* (the interpretation of the stimulus). Methodologically, we integrate neurophysiological, behavioral, and personality dimensions via early multimodal fusion and personality-conditioned modeling. Baseline models significantly outperform chance in arousal classification, and incorporating personality features yields statistically significant improvements in felt-affect prediction, confirming the role of individual differences. AFFEC bridges the gap between controlled experimentation and naturalistic social interaction.

📝 Abstract
Emotion recognition has the potential to play a pivotal role in enhancing human-computer interaction by enabling systems to accurately interpret and respond to human affect. Yet, capturing emotions in face-to-face contexts remains challenging due to subtle nonverbal cues, variations in personal traits, and the real-time dynamics of genuine interactions. Existing emotion recognition datasets often rely on limited modalities or controlled conditions, thereby missing the richness and variability found in real-world scenarios. In this work, we introduce Advancing Face-to-Face Emotion Communication (AFFEC), a multimodal dataset designed to address these gaps. AFFEC encompasses 84 simulated emotional dialogues across six distinct emotions, recorded from 73 participants over more than 5,000 trials and annotated with more than 20,000 labels. It integrates electroencephalography (EEG), eye-tracking, galvanic skin response (GSR), facial videos, and Big Five personality assessments. Crucially, AFFEC explicitly distinguishes between felt emotions (the participant's internal affect) and perceived emotions (the observer's interpretation of the stimulus). Baseline analyses spanning unimodal features and straightforward multimodal fusion demonstrate that even minimal processing yields classification performance significantly above chance, especially for arousal. Incorporating personality traits further improves predictions of felt emotions, highlighting the importance of individual differences. By bridging controlled experimentation with more realistic face-to-face stimuli, AFFEC offers a unique resource for researchers aiming to develop context-sensitive, adaptive, and personalized emotion recognition models.
Problem

Research questions and friction points this paper is trying to address.

Challenges in capturing real-time face-to-face emotion dynamics
Limited modalities in existing emotion recognition datasets
Need for distinguishing felt vs perceived emotions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal dataset integrating EEG, eye-tracking, GSR, facial videos
Distinguishes between felt and perceived emotions explicitly
Incorporates personality traits to improve emotion prediction
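The early-fusion-plus-personality baseline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature dimensions, per-modality summaries, and variable names are assumptions, and the data here is synthetic.

```python
import numpy as np
from numpy.random import default_rng

rng = default_rng(0)

# Synthetic stand-ins for per-trial features (dimensions are illustrative,
# not taken from the paper): EEG band power, gaze statistics, GSR summaries,
# facial action-unit activations, and Big Five trait scores.
n_trials = 200
eeg  = rng.normal(size=(n_trials, 32))  # e.g., band power per channel
gaze = rng.normal(size=(n_trials, 8))   # e.g., fixation/saccade statistics
gsr  = rng.normal(size=(n_trials, 4))   # e.g., tonic/phasic summaries
face = rng.normal(size=(n_trials, 17))  # e.g., action-unit intensities
big5 = rng.normal(size=(n_trials, 5))   # openness .. neuroticism

def early_fusion(*modalities):
    """Early fusion: z-score each modality, then concatenate features."""
    scaled = [(m - m.mean(0)) / (m.std(0) + 1e-8) for m in modalities]
    return np.concatenate(scaled, axis=1)

X_base = early_fusion(eeg, gaze, gsr, face)        # physiology + behavior
X_pers = early_fusion(eeg, gaze, gsr, face, big5)  # + personality conditioning

print(X_base.shape, X_pers.shape)  # (200, 61) (200, 66)
```

The fused matrices would then feed any standard classifier for arousal or felt/perceived emotion labels; comparing performance with and without the `big5` block mirrors the paper's personality-conditioned analysis.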
Meisam J. Sekiavandi
IT University of Copenhagen, Pioneer Centre for Artificial Intelligence
Laurits Dixen
IT University of Copenhagen, Pioneer Centre for Artificial Intelligence
Jostein Fimland
IT University of Copenhagen, Pioneer Centre for Artificial Intelligence
Sree Keerthi Desu
Technical University of Denmark
Antonia-Bianca Zserai
IT University of Copenhagen
Ye Sul Lee
IT University of Copenhagen
Maria Barrett
AMD Silo AI
eye tracking, natural language processing, syntax, cognitive modeling
Paolo Burre
IT University of Copenhagen, Pioneer Centre for Artificial Intelligence