🤖 AI Summary
Existing affective recognition models neglect inter-individual variability and suffer from limited accuracy in Valence-Arousal (VA) space modeling. Method: We introduce the first high-precision multimodal affective recognition dataset integrating individual psychological traits, comprising EEG, ECG, pulse interval (PI) data, and standardized psychometric assessments (NEO-FFI, HADS, PANAS) from 64 participants. We systematically incorporate multimodal physiological signals together with personality and affective-disorder traits into a refined VA framework, employing two paradigms (video-based emotion elicitation and the Mannheim Multicomponent Stress Test, MMST) to jointly modulate the valence and arousal dimensions. Contribution/Results: The publicly released dataset features fine-grained individual annotations, substantially improving cross-subject generalizability and enabling robust personalized modeling. It establishes a critical data foundation for computer-aided clinical diagnosis of affective disorders and for adaptive human-machine affective interaction.
📝 Abstract
We introduce a novel multimodal emotion recognition dataset that enhances the precision of the Valence-Arousal (VA) model while accounting for individual differences. The dataset includes electroencephalography (EEG), electrocardiography (ECG), and pulse interval (PI) recordings from 64 participants. Data collection employed two emotion induction paradigms: video stimuli targeting different valence levels (positive, neutral, and negative) and the Mannheim Multicomponent Stress Test (MMST), which induces high arousal through cognitive, emotional, and social stressors. To enrich the dataset, participants' personality traits, anxiety, depression, and emotional states were assessed using validated questionnaires. By capturing a broad spectrum of affective responses while accounting for individual differences, this dataset provides a robust resource for precise emotion modeling. The integration of multimodal physiological data with psychological assessments lays a strong foundation for personalized emotion recognition. We anticipate this resource will support the development of more accurate, adaptive, and individualized emotion recognition systems across diverse applications.
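To make the described data organization concrete, the sketch below shows one plausible per-trial record combining the physiological modalities (EEG, ECG, PI), the VA annotations, and the questionnaire scores. All field names, shapes, and value ranges here are illustrative assumptions, not the released schema; the PI-to-heart-rate helper simply demonstrates how the pulse interval stream can be summarized.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical per-trial record; field names and shapes are
# illustrative assumptions, not the dataset's actual schema.
@dataclass
class TrialRecord:
    subject_id: int            # one of the 64 participants
    paradigm: str              # "video" or "MMST"
    eeg: List[List[float]]     # channels x samples (EEG)
    ecg: List[float]           # single-lead ECG samples
    pi: List[float]            # pulse intervals, in seconds
    valence: float             # annotated valence rating
    arousal: float             # annotated arousal rating
    traits: dict = field(default_factory=dict)  # e.g. personality/anxiety/depression scores

def mean_heart_rate(pi_seconds: List[float]) -> float:
    """Convert a sequence of pulse intervals (s) to mean heart rate (bpm)."""
    mean_pi = sum(pi_seconds) / len(pi_seconds)
    return 60.0 / mean_pi

# Usage: a stress-paradigm trial with 0.8 s pulse intervals (75 bpm).
rec = TrialRecord(subject_id=1, paradigm="MMST", eeg=[[0.0]], ecg=[0.0],
                  pi=[0.8, 0.8, 0.8], valence=3.0, arousal=7.5)
print(round(mean_heart_rate(rec.pi), 3))  # 75.0
```

Keeping the questionnaire scores in the same record as the signals is what enables the personalized modeling the abstract describes: a model can condition on `traits` when mapping physiology to the VA space.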