🤖 AI Summary
Long-standing scarcity of large-scale, publicly available, annotated datasets has impeded progress in Thai speech emotion recognition (SER). To address this, we introduce THAI-SER, the first sizeable open-source Thai SER corpus, comprising 27,854 utterances (41 h 36 m) from 200 professional actors, covering five primary emotions (neutral, angry, happy, sad, and frustrated), and recorded in both scripted and improvised sessions over Zoom and in two studio setups. Our methodology combines professional director supervision, multi-environment recording, crowdsourced annotation, and rigorous quality control (Krippendorff's alpha = 0.692 after filtering; majority agreement kept above 0.71). Human recognition accuracy reaches 77.2%, and models trained on THAI-SER demonstrate solid in-corpus and cross-corpus performance. The full corpus and all experimental code are publicly released under the CC BY-SA 4.0 license, establishing critical infrastructure for SER research in low-resource languages.
📝 Abstract
We present THAI-SER, the first sizeable corpus for Thai speech emotion recognition, containing 41 hours and 36 minutes of speech (27,854 utterances) from 100 recordings made in different recording environments: Zoom and two studio setups. The recordings contain both scripted and improvised sessions, acted by 200 professional actors (112 female and 88 male, aged 18 to 55) and directed by professional directors. The actors were assigned one of five primary emotions (neutral, angry, happy, sad, and frustrated) when recording each utterance. The utterances are then annotated with an emotional category through crowdsourcing. To control the quality of the annotation process, we also design an extensive filtering and quality-control scheme that keeps the majority agreement score above 0.71. We evaluate the annotated corpus using two metrics: inter-annotator reliability and human recognition accuracy. Inter-annotator reliability is measured with Krippendorff's alpha; after filtering, our corpus achieves an alpha of 0.692, above the recommended threshold of 0.667. For human recognition accuracy, our corpus scores up to 0.772 post-filtering. We also report results for models trained on the corpus, evaluated in both in-corpus and cross-corpus setups. The corpus is publicly available under a Creative Commons BY-SA 4.0 license, together with the code for our experiments.
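Both quality metrics named in the abstract are standard and straightforward to reproduce. Below is a minimal Python sketch (not the authors' released code) of how nominal-data Krippendorff's alpha and per-utterance majority agreement could be computed from crowdsourced labels; the data layout (one list of labels per utterance) and the way the 0.71 threshold is applied are illustrative assumptions.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.

    `units`: one list of crowd labels per utterance, e.g.
    [["happy", "happy", "sad"], ["angry", "angry"], ...].
    Utterances with fewer than two labels are unpairable and skipped.
    """
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)  # total pairable labels
    # Diagonal of the coincidence matrix: within-utterance agreements.
    o_diag = sum(
        sum(c * (c - 1) for c in Counter(u).values()) / (len(u) - 1)
        for u in units
    )
    # Category marginals over all pairable labels.
    n_c = Counter(lab for u in units for lab in u)
    d_o = n - o_diag  # observed disagreement
    d_e = n - sum(c * (c - 1) for c in n_c.values()) / (n - 1)  # expected
    return 1.0 - d_o / d_e

def majority_agreement(labels):
    """Fraction of annotators who picked the modal emotion label."""
    return max(Counter(labels).values()) / len(labels)

# Toy example: drop utterances below the 0.71 majority-agreement
# threshold reported in the abstract, then measure alpha on the rest.
annotations = [
    ["happy", "happy", "happy"],
    ["angry", "angry", "angry", "frustrated"],
    ["sad", "sad", "sad", "sad"],
    ["neutral", "sad", "happy", "angry"],  # 0.25 agreement, filtered out
]
kept = [u for u in annotations if majority_agreement(u) > 0.71]
print(f"{len(kept)} utterances kept; alpha = {krippendorff_alpha_nominal(kept):.3f}")
```

On this toy input the script keeps three utterances and prints an alpha of 0.767, illustrating how filtering low-agreement utterances can push alpha past Krippendorff's recommended 0.667 floor, as the corpus does at 0.692.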