Emotion is Not Just a Label: Latent Emotional Factors in LLM Processing

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a common oversimplification in prior research, which treats emotion as a set of discrete categorical labels and thereby overlooks its role as a latent representational variable influencing reasoning in large language models. The study proposes an emotion-regularized training framework that explicitly models emotion as a latent factor shaping attention structure, constraining representational drift under emotional conditions. To support this approach, the authors introduce AURA-QA, an emotionally balanced, human-authored question-answering dataset. Through geometric analyses of attention (locality, center-of-mass distance, and entropy), the paper demonstrates that emotional tone systematically shapes internal model dynamics. Experiments show that the proposed method not only improves robustness in emotionally charged contexts but also yields consistent in-domain gains on non-emotional benchmarks, supporting emotion's substantive role as a latent variable in reasoning.
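The summary names three attention-geometry metrics (locality, center-of-mass distance, and entropy) without giving formulas. The sketch below is a minimal, hypothetical formalization of how such metrics could be computed from a transformer attention map; the exact definitions, the `attention_geometry` function name, and the averaging choices are assumptions for illustration, not the paper's specification.

```python
import torch

def attention_geometry(attn: torch.Tensor) -> dict:
    """Hypothetical per-layer attention-geometry metrics.

    attn: attention weights of shape (heads, q_len, k_len),
          each row a softmax distribution over key positions.
    """
    heads, q_len, k_len = attn.shape
    key_pos = torch.arange(k_len, dtype=attn.dtype)
    query_pos = torch.arange(q_len, dtype=attn.dtype)

    # Entropy: how diffuse each query's attention distribution is.
    entropy = -(attn * attn.clamp_min(1e-12).log()).sum(-1)   # (heads, q_len)

    # Center of mass: attention-weighted mean key position per query,
    # and its distance from the query's own position.
    center = attn @ key_pos                                   # (heads, q_len)
    com_distance = (center - query_pos).abs()                 # (heads, q_len)

    # Locality: expected absolute query-key offset under the attention
    # distribution (smaller means more local attention).
    offsets = (key_pos[None, :] - query_pos[:, None]).abs()   # (q_len, k_len)
    locality = (attn * offsets).sum(-1)                       # (heads, q_len)

    return {
        "entropy": entropy.mean().item(),
        "com_distance": com_distance.mean().item(),
        "locality": locality.mean().item(),
    }
```

Under these definitions, comparing the returned scalars for neutral versus emotionally rewritten passages would surface the kind of systematic shifts the summary describes.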

📝 Abstract
Large language models are routinely deployed on text that varies widely in emotional tone, yet their reasoning behavior is typically evaluated without accounting for emotion as a source of representational variation. Prior work has largely treated emotion as a prediction target, for example in sentiment analysis or emotion classification. In contrast, we study emotion as a latent factor that shapes how models attend to and reason over text. We analyze how emotional tone systematically alters attention geometry in transformer models, showing that metrics such as locality, center-of-mass distance, and entropy vary across emotions and correlate with downstream question-answering performance. To facilitate controlled study of these effects, we introduce Affect-Uniform ReAding QA (AURA-QA), a question-answering dataset with emotionally balanced, human-authored context passages. Finally, we propose an emotional regularization framework that constrains emotion-conditioned representational drift during training. Experiments across multiple QA benchmarks demonstrate that this approach improves reading comprehension on datasets both with and without emotional variation, yielding consistent gains under distribution shift and in-domain improvements on several benchmarks.
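The abstract describes the regularizer only at a high level: it constrains emotion-conditioned representational drift during training. The sketch below shows one plausible form such a penalty could take, assuming paired neutral and emotional versions of each context passage and a Hugging Face-style encoder whose output exposes `last_hidden_state`; the pairing scheme, mean pooling, the `lam` weight, and all names are illustrative assumptions rather than the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def emotion_drift_penalty(model, neutral_ids, emotional_ids,
                          neutral_mask, emotional_mask):
    """Penalize distance between pooled representations of a neutral
    passage and an emotionally rewritten version of the same content."""
    h_neutral = model(input_ids=neutral_ids,
                      attention_mask=neutral_mask).last_hidden_state
    h_emotional = model(input_ids=emotional_ids,
                        attention_mask=emotional_mask).last_hidden_state

    def mean_pool(h, mask):
        # Average token states, ignoring padding positions.
        m = mask.unsqueeze(-1).to(h.dtype)
        return (h * m).sum(1) / m.sum(1).clamp_min(1.0)

    z_neutral = mean_pool(h_neutral, neutral_mask)
    z_emotional = mean_pool(h_emotional, emotional_mask)

    # Drift penalty: squared distance between the paired summaries.
    return F.mse_loss(z_emotional, z_neutral)

# Hypothetical training step: the penalty is added to the usual QA loss
# with a weight lam chosen on validation data.
#   loss = qa_loss + lam * emotion_drift_penalty(model, ...)
```

In this reading, the penalty discourages the encoder from letting emotional tone push a passage's representation away from that of its neutral counterpart, which matches the abstract's description of constraining drift.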
Problem

Research questions and friction points this paper is trying to address.

emotion
large language models
attention
representational variation
question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent emotional factors
attention geometry
emotional regularization
AURA-QA
representational drift