Fluent but Unfeeling: The Emotional Blind Spots of Language Models

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the alignment between large language models (LLMs) and human self-reported emotions in fine-grained sentiment recognition—moving beyond conventional coarse-grained classification. To this end, we introduce EXPRESS, the first benchmark dataset for fine-grained self-reported emotion analysis, comprising 251 fine-grained, self-disclosed emotion labels. Grounded in classical emotion theory, we decompose model predictions into eight basic emotion categories, establishing an interpretable, fine-grained alignment evaluation framework. Through systematic experiments across diverse prompting strategies, we evaluate leading LLMs and find that while they generate theoretically plausible emotion terms, their capacity to capture context-dependent, nuanced affective states remains substantially inferior to human performance. Our core contributions are threefold: (1) the EXPRESS dataset; (2) a novel fine-grained emotion decomposition and alignment evaluation paradigm; and (3) empirical characterization of the fundamental limits of LLMs' emotional alignment capability.

📝 Abstract
The versatility of Large Language Models (LLMs) in natural language understanding has made them increasingly popular in mental health research. While many studies explore LLMs' capabilities in emotion recognition, a critical gap remains in evaluating whether LLMs align with human emotions at a fine-grained level. Existing research typically focuses on classifying emotions into predefined, limited categories, overlooking more nuanced expressions. To address this gap, we introduce EXPRESS, a benchmark dataset curated from Reddit communities featuring 251 fine-grained, self-disclosed emotion labels. Our comprehensive evaluation framework examines predicted emotion terms and decomposes them into eight basic emotions using established emotion theories, enabling a fine-grained comparison. Systematic testing of prevalent LLMs under various prompt settings reveals that accurately predicting emotions that align with human self-disclosed emotions remains challenging. Qualitative analysis further shows that while certain LLMs generate emotion terms consistent with established emotion theories and definitions, they sometimes fail to capture contextual cues as effectively as human self-disclosures. These findings highlight the limitations of LLMs in fine-grained emotion alignment and offer insights for future research aimed at enhancing their contextual understanding.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' fine-grained emotion alignment with humans
Assessing nuanced emotional expressions beyond predefined categories
Testing contextual emotion cue capture in language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created EXPRESS benchmark dataset with fine-grained emotion labels
Developed evaluation framework decomposing emotions into eight categories
Systematically tested LLMs under various prompt settings for alignment
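The decomposition-and-alignment idea above can be sketched in a few lines: map each fine-grained emotion term onto Plutchik's eight basic emotions, then score overlap between a model's prediction and a user's self-disclosed label. This is an illustrative reconstruction, not the paper's code; the tiny `TERM_TO_BASIC` lexicon is hypothetical (a real system would use a theory-grounded emotion lexicon).

```python
# Plutchik's eight basic emotions (as referenced in the paper's framework).
PLUTCHIK = {"joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"}

# Hypothetical mapping from fine-grained terms to basic-emotion components;
# illustrative only.
TERM_TO_BASIC = {
    "elated": {"joy"},
    "anxious": {"fear", "anticipation"},
    "heartbroken": {"sadness"},
    "resentful": {"anger", "disgust"},
}

def decompose(term: str) -> set[str]:
    """Decompose a fine-grained emotion term into basic-emotion components."""
    return TERM_TO_BASIC.get(term.lower(), set())

def alignment(predicted: str, self_disclosed: str) -> float:
    """Jaccard overlap between the decomposed prediction and gold label."""
    p, g = decompose(predicted), decompose(self_disclosed)
    if not p and not g:
        return 0.0
    return len(p & g) / len(p | g)

print(alignment("elated", "elated"))        # 1.0 (full overlap)
print(alignment("anxious", "heartbroken"))  # 0.0 (disjoint components)
```

Decomposing before comparing is what makes the evaluation fine-grained yet interpretable: two different surface terms can still receive partial credit when their basic-emotion components overlap.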
Bangzhao Shu
Northeastern University
Natural Language Processing, Computational Social Science, Computational Linguistics
Isha Joshi
Northeastern University
Melissa Karnaze
UC San Diego
Anh C. Pham
University of Massachusetts Amherst
Ishita Kakkar
University of Massachusetts Amherst
Sindhu Kothe
UC San Diego
Arpine Hovasapian
Independent Researcher
Mai ElSherief
Northeastern University