Emergence of Hierarchical Emotion Organization in Large Language Models

📅 2025-07-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how large language models (LLMs) intrinsically model users' affective states, with the aim of supporting ethical deployment in conversational agents. To address this, it brings the psychological emotion-wheel framework into LLM representation analysis, integrating probabilistic dependency modeling, cross-group misclassification detection, and human-grounded behavioral experiments. The analysis reveals that LLMs spontaneously develop a hierarchical emotion tree aligned with human psychological structure, with model scale correlating positively with emotional granularity and hierarchical depth. Critically, the study identifies systematic affective-recognition biases against marginalized groups, particularly individuals from lower socioeconomic backgrounds. Beyond confirming the existence of human-like affective organization in LLMs, this work establishes the first cognitively grounded, theory-driven framework for interpretable emotion-representation analysis in foundation models, providing both theoretical foundations and empirical evidence for advancing fairness, reliability, and ethical accountability in affective AI systems.
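The summary's probabilistic dependency step lends itself to a short illustration. Below is a minimal sketch, assuming pairwise joint probability tables over binary emotion indicators have already been estimated from model outputs; the emotion list, the function names, and the Chow-Liu-style maximum-spanning-tree construction are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical emotion vocabulary (Plutchik's eight primary emotions).
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def mutual_information(joint: np.ndarray) -> float:
    """Mutual information of two binary emotion indicators,
    given their 2x2 joint probability table."""
    px = joint.sum(axis=1, keepdims=True)  # marginal of emotion i
    py = joint.sum(axis=0, keepdims=True)  # marginal of emotion j
    nz = joint > 0                         # mask zero cells to avoid log(0)
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def emotion_tree(joints: dict[tuple[int, int], np.ndarray]) -> np.ndarray:
    """Connect emotions into a tree that maximises total pairwise MI.

    joints maps an emotion-index pair (i, j) to its estimated 2x2
    joint table; returns a boolean adjacency matrix of the tree.
    """
    n = len(EMOTIONS)
    mi = np.zeros((n, n))
    for (i, j), table in joints.items():
        mi[i, j] = mi[j, i] = mutual_information(table)
    # Negating the weights turns SciPy's minimum spanning tree into a
    # maximum-MI spanning tree (the classic Chow-Liu construction).
    return minimum_spanning_tree(-mi).toarray() != 0
```

A maximum-weight spanning tree is one standard way to turn pairwise dependencies into a tree-structured hierarchy; the paper's actual construction, and its alignment test against human emotion wheels, may differ in the details.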

📝 Abstract
As large language models (LLMs) increasingly power conversational agents, understanding how they model users' emotional states is critical for ethical deployment. Inspired by emotion wheels -- a psychological framework that argues emotions organize hierarchically -- we analyze probabilistic dependencies between emotional states in model outputs. We find that LLMs naturally form hierarchical emotion trees that align with human psychological models, and larger models develop more complex hierarchies. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Human studies reveal striking parallels, suggesting that LLMs internalize aspects of social perception. Beyond highlighting emergent emotional reasoning in LLMs, our results hint at the potential of using cognitively-grounded theories for developing better model evaluations.
Problem

Research questions and friction points this paper is trying to address.

Understanding hierarchical emotion modeling in large language models
Identifying biases in emotion recognition across socioeconomic personas
Exploring cognitive theories for better model evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing probabilistic dependencies in emotional states
Discovering hierarchical emotion trees in LLMs
Identifying biases in emotion recognition across personas (a measurement sketch follows this list)
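To make the bias analysis concrete, here is a hedged sketch of how per-persona misclassification rates might be tallied. The `classify` callable, the persona-group labels, and the data format are hypothetical stand-ins for the paper's actual evaluation pipeline.

```python
from collections import Counter, defaultdict

def misclassification_rates(examples, classify):
    """Tally per-group emotion misclassification.

    examples: iterable of (persona_group, true_emotion, text) triples.
    classify: callable mapping (text, persona_group) to a predicted label.
    Returns per-group error rates and per-group confusion counts.
    """
    errors, totals = Counter(), Counter()
    confusions = defaultdict(Counter)
    for group, true_emotion, text in examples:
        pred = classify(text, group)
        totals[group] += 1
        if pred != true_emotion:
            errors[group] += 1
            confusions[group][(true_emotion, pred)] += 1
    rates = {group: errors[group] / totals[group] for group in totals}
    return rates, confusions
```

Comparing `rates` between single-attribute and intersectional personas is the kind of tabulation that would surface the compounding misclassifications the abstract reports.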
Authors

Bo Zhao
CBS-NTT Program in Physics of Intelligence, Harvard University; University of California, San Diego
Maya Okawa
CBS-NTT Program in Physics of Intelligence, Harvard University; Physics of Artificial Intelligence Laboratories, NTT Research, Inc.
Eric J. Bigelow
CBS-NTT Program in Physics of Intelligence, Harvard University; Department of Psychology, Harvard University
Rose Yu
Associate Professor, University of California, San Diego
Machine Learning, Computational Sustainability
Tomer Ullman
Assistant Professor, Harvard University
Cognitive Science, Computational Modeling, Cognitive Development, Artificial Intelligence
Ekdeep Singh Lubana
Goodfire AI
AI, Machine Learning, Deep Learning
Hidenori Tanaka
CBS-NTT Program in Physics of Intelligence, Harvard University; Physics of Artificial Intelligence Laboratories, NTT Research, Inc.