EmoNet-Voice: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speech emotion recognition (SER) datasets suffer from coarse emotional granularity, privacy risks, and a reliance on acted portrayals, leaving the field without psychologically validated, trustworthy benchmarks. To address these limitations, this work introduces a fine-grained, privacy-preserving SER benchmark: built on 4,500+ hours of synthetic speech (11 voices × 40 emotions × 4 languages), it provides expert-validated annotations along two dimensions, 40 discrete emotion categories and perceived intensity levels, and covers rare, sensitive emotions (e.g., shame, awe). Methodologically, the work contributes scenario-driven script generation, an expert annotation protocol, and Empathic Insight Voice (EIV), a family of speech emotion recognition models. Experiments show that EIV reaches high agreement with human experts. Evaluations across the current model landscape further show markedly higher accuracy for high-arousal emotions (e.g., anger) than for low-arousal states (e.g., concentration), offering empirical grounding for SER interpretability and psychological validity.
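
To make the notion of expert agreement concrete, here is a minimal sketch of how agreement between model-predicted and expert-assigned intensity ratings might be scored. The 0-3 rating scale and the use of Spearman rank correlation are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: scoring model-vs-expert agreement on emotion intensity.
# Assumptions (not from the paper): ratings are integers 0-3 per
# (clip, emotion) pair, and agreement is summarized with Spearman rank
# correlation.
from scipy.stats import spearmanr

def intensity_agreement(expert_ratings, model_ratings):
    """Spearman correlation between expert and model intensity ratings.

    Both arguments are sequences of integer intensities (e.g., 0 = not
    present, 3 = intensely present) aligned over the same (clip, emotion)
    pairs.
    """
    rho, _ = spearmanr(expert_ratings, model_ratings)
    return rho

# Toy usage with hypothetical ratings for five clips:
experts = [0, 2, 3, 1, 2]
model = [0, 2, 2, 1, 3]
print(f"Spearman rho: {intensity_agreement(experts, model):.2f}")
```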

📝 Abstract
The advancement of text-to-speech and audio generation models necessitates robust benchmarks for evaluating the emotional understanding capabilities of AI systems. Current speech emotion recognition (SER) datasets often exhibit limitations in emotional granularity, privacy concerns, or reliance on acted portrayals. This paper introduces EmoNet-Voice, a new resource for speech emotion detection, which includes EmoNet-Voice Big, a large-scale pre-training dataset (featuring over 4,500 hours of speech across 11 voices, 40 emotions, and 4 languages), and EmoNet-Voice Bench, a novel benchmark dataset with human expert annotations. EmoNet-Voice is designed to evaluate SER models on a fine-grained spectrum of 40 emotion categories with different levels of intensity. Leveraging state-of-the-art voice generation, we curated synthetic audio snippets simulating actors portraying scenes designed to evoke specific emotions. Crucially, we conducted rigorous validation by psychology experts who assigned perceived intensity labels. This synthetic, privacy-preserving approach allows for the inclusion of sensitive emotional states often absent in existing datasets. Lastly, we introduce Empathic Insight Voice models that set a new standard in speech emotion recognition with high agreement with human experts. Our evaluations across the current model landscape yield valuable findings, such as high-arousal emotions like anger being much easier to detect than low-arousal states like concentration.
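
The abstract's finding that high-arousal emotions are easier to detect can be made concrete with a small grouping computation. The sketch below uses made-up per-emotion accuracies and a simplified arousal mapping purely for illustration; the actual taxonomy and numbers come from the paper's evaluation.

```python
# Sketch: comparing mean detection accuracy for high- vs low-arousal emotions.
# The accuracies and the arousal mapping below are illustrative placeholders,
# not results from the paper.
from statistics import mean

# Hypothetical per-emotion accuracies from some SER model.
accuracy = {"anger": 0.91, "fear": 0.87, "surprise": 0.84,
            "concentration": 0.55, "contentment": 0.60, "boredom": 0.58}

# Hypothetical arousal grouping (a simplified valence-arousal view).
arousal = {"anger": "high", "fear": "high", "surprise": "high",
           "concentration": "low", "contentment": "low", "boredom": "low"}

for level in ("high", "low"):
    accs = [a for emo, a in accuracy.items() if arousal[emo] == level]
    print(f"{level}-arousal mean accuracy: {mean(accs):.2f}")
```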
Problem

Research questions and friction points this paper is trying to address.

Lack of fine-grained emotional benchmarks for speech AI evaluation
Privacy concerns and acted portrayals limit current SER datasets
Need for expert-validated, synthetic speech emotion detection resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale synthetic dataset spanning 40 fine-grained emotions with intensity labels (see the record sketch after this list)
Synthetic audio validated by psychology experts
Privacy-preserving approach for sensitive emotions
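
As a way to picture what one annotated clip carries, here is a sketch of a plausible record schema. The field names and values are assumptions for illustration; the released dataset defines its own schema.

```python
# Sketch of one plausible record schema for an expert-annotated clip.
# Field names below are assumptions, not the dataset's actual schema.
from dataclasses import dataclass

@dataclass
class EmoNetVoiceSample:
    audio_path: str        # path to the synthetic speech snippet
    voice_id: str          # one of the 11 synthetic voices
    language: str          # one of the 4 languages
    emotion: str           # one of the 40 fine-grained emotion categories
    expert_intensity: int  # perceived intensity assigned by psychology experts
    script: str            # scenario text performed by the voice model

# Toy usage with a hypothetical record:
sample = EmoNetVoiceSample(
    audio_path="clips/0001.wav", voice_id="v03", language="en",
    emotion="awe", expert_intensity=2,
    script="Standing at the canyon rim for the first time...",
)
print(sample.emotion, sample.expert_intensity)
```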