🤖 AI Summary
Existing speech emotion recognition (SER) datasets suffer from coarse emotional granularity, high privacy risks, and substantial performative bias from acted portrayals, leaving the field without psychologically validated, trustworthy benchmarks. To address these limitations, this work introduces a fine-grained, privacy-preserving SER benchmark: built on more than 4,500 hours of synthetic speech (11 voices × 40 emotions × 4 languages), it provides expert-validated annotations along two dimensions, 40 discrete emotion categories and their perceived intensity levels, and covers low-frequency, sensitive emotions (e.g., shame, awe). Methodologically, the work contributes scenario-driven script generation, expert validation with perceived-intensity labels, and Empathic Insight Voice (EIV), a family of models for speech emotion recognition. Experiments show that EIV sets a new standard, reaching high agreement with human experts. Evaluations across the current model landscape further reveal markedly higher accuracy for high-arousal emotions (e.g., anger) than for low-arousal states (e.g., concentration), offering empirical evidence relevant to SER interpretability and psychological validity.
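As a rough illustration of the dual annotation scheme described above (40 fine-grained emotion categories, each with a perceived intensity level assigned by experts), a record in such a dataset might look like the following sketch. The field names, the 0–3 intensity scale, and the file paths are assumptions for illustration, not the actual EmoNet-Voice schema.

```python
from dataclasses import dataclass, field

# Hypothetical record structure for a fine-grained, privacy-preserving SER sample.
# Field names and the 0-3 intensity scale are illustrative assumptions,
# not the dataset's actual schema.
@dataclass
class EmotionAnnotation:
    emotion: str       # one of 40 fine-grained categories, e.g. "awe", "shame"
    intensity: int     # perceived intensity level assigned by a psychology expert
    annotator_id: str  # which expert produced this judgment

@dataclass
class VoiceSample:
    audio_path: str    # synthetic audio snippet (no real speakers involved)
    voice_id: str      # one of 11 synthetic voices
    language: str      # one of 4 languages
    script: str        # scenario-driven script the synthetic voice "acts out"
    annotations: list[EmotionAnnotation] = field(default_factory=list)

# Example: an expert marks a snippet as moderate-intensity "awe".
sample = VoiceSample(
    audio_path="clips/awe_0001.wav",
    voice_id="voice_07",
    language="en",
    script="Standing at the rim of the canyon for the first time...",
)
sample.annotations.append(
    EmotionAnnotation(emotion="awe", intensity=2, annotator_id="expert_a")
)
```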
📝 Abstract
The advancement of text-to-speech and audio generation models necessitates robust benchmarks for evaluating the emotional understanding capabilities of AI systems. Current speech emotion recognition (SER) datasets often exhibit limitations in emotional granularity, privacy concerns, or reliance on acted portrayals. This paper introduces EmoNet-Voice, a new resource for speech emotion detection, which includes EmoNet-Voice Big, a large-scale pre-training dataset (featuring over 4,500 hours of speech across 11 voices, 40 emotions, and 4 languages), and EmoNet-Voice Bench, a novel benchmark dataset with human expert annotations. EmoNet-Voice is designed to evaluate SER models on a fine-grained spectrum of 40 emotion categories with varying levels of intensity. Leveraging state-of-the-art voice generation, we curated synthetic audio snippets simulating actors portraying scenes designed to evoke specific emotions. Crucially, we conducted rigorous validation by psychology experts who assigned perceived intensity labels. This synthetic, privacy-preserving approach allows for the inclusion of sensitive emotional states often absent in existing datasets. Lastly, we introduce Empathic Insight Voice models that set a new standard in speech emotion recognition, with high agreement with human experts. Our evaluations across the current model landscape yield valuable findings, such as high-arousal emotions like anger being much easier to detect than low-arousal states like concentration.
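To make the arousal-related finding concrete, here is a minimal sketch of how per-emotion detection accuracy could be compared between high- and low-arousal categories. The arousal grouping, prediction format, and toy data below are assumptions for illustration, not the paper's evaluation code or taxonomy.

```python
from collections import defaultdict

# Illustrative arousal grouping; the actual category-to-arousal mapping
# used in the paper may differ.
HIGH_AROUSAL = {"anger", "fear", "excitement"}
LOW_AROUSAL = {"concentration", "contentment", "boredom"}

def per_emotion_accuracy(pairs):
    """pairs: iterable of (true_emotion, predicted_emotion) strings."""
    correct, total = defaultdict(int), defaultdict(int)
    for true, pred in pairs:
        total[true] += 1
        correct[true] += int(true == pred)
    return {emotion: correct[emotion] / total[emotion] for emotion in total}

def group_mean(accuracy, group):
    """Mean accuracy over the emotions of `group` that appear in `accuracy`."""
    present = [accuracy[e] for e in group if e in accuracy]
    return sum(present) / len(present) if present else float("nan")

# Usage with toy predictions: high-arousal emotions tend to score higher,
# mirroring the reported trend.
preds = [
    ("anger", "anger"), ("anger", "anger"), ("fear", "anger"),
    ("concentration", "contentment"), ("concentration", "concentration"),
    ("boredom", "contentment"),
]
acc = per_emotion_accuracy(preds)
print("high-arousal mean accuracy:", group_mean(acc, HIGH_AROUSAL))
print("low-arousal mean accuracy:", group_mean(acc, LOW_AROUSAL))
```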