🤖 AI Summary
Low-resource languages suffer from severe performance bottlenecks in sentiment analysis due to the scarcity of high-quality labeled data. To address this, we introduce BRIGHTER, a multilingual, multi-label sentiment dataset covering 28 languages—including numerous low-resource varieties from Africa, Asia, Eastern Europe, and Latin America—annotated collaboratively by native speakers across diverse domains under a rigorous quality control framework. This work presents the first systematic multilingual, multi-label benchmark supporting joint recognition of sentiment categories and intensity levels. We propose a collaborative annotation framework and data governance methodology specifically designed for low-resource settings. We establish new state-of-the-art baselines on monolingual and cross-lingual multi-label classification tasks, revealing substantial inter-lingual performance disparities. Furthermore, we empirically delineate the effectiveness boundaries of LLM-augmented strategies, achieving an average 32.7% F1-score improvement across all 28 languages.
📝 Abstract
People worldwide use language in subtle and complex ways to express emotions. While emotion recognition -- an umbrella term for several NLP tasks -- significantly impacts different applications in NLP and other fields, most work in the area is focused on high-resource languages. Therefore, this has led to major disparities in research and proposed solutions, especially for low-resource languages that suffer from the lack of high-quality datasets. In this paper, we present BRIGHTER-- a collection of multilabeled emotion-annotated datasets in 28 different languages. BRIGHTER covers predominantly low-resource languages from Africa, Asia, Eastern Europe, and Latin America, with instances from various domains annotated by fluent speakers. We describe the data collection and annotation processes and the challenges of building these datasets. Then, we report different experimental results for monolingual and crosslingual multi-label emotion identification, as well as intensity-level emotion recognition. We investigate results with and without using LLMs and analyse the large variability in performance across languages and text domains. We show that BRIGHTER datasets are a step towards bridging the gap in text-based emotion recognition and discuss their impact and utility.