🤖 AI Summary
This study investigates whether large language models (LLMs) possess context-invariant, global neural mechanisms for emotion representation, and shows that these mechanisms admit controllable intervention. Addressing three core questions—(1) whether emotion mechanisms are context-independent, (2) how they are neurally instantiated, and (3) whether universal emotion control is feasible—we propose a systematic methodology: constructing a controlled emotion-elicitation dataset (SEV), representational decomposition, causal mediation analysis, sublayer-wise influence quantification, and ablation/enhancement interventions. We first discover and empirically validate a “global affect circuit”—a consistent, cross-task and cross-context neural substrate encoding emotional valence—comprising specific neurons and attention heads. This circuit supports direct, prompt-free, parameter-level modulation, achieving 99.65% emotion-expression accuracy on a held-out test set and substantially outperforming state-of-the-art prompting- and steering-based approaches. Our work establishes a novel paradigm for understanding and controlling emotion mechanisms in LLMs.
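The direction-extraction step underlying this kind of circuit analysis can be illustrated with a minimal, self-contained sketch. Everything below is synthetic and hypothetical—`hidden_states` stands in for LLM activations on SEV-style prompts, and the planted `true_valence_axis` is a toy stand-in for the affect signal—but the core move (a difference-of-means vector that recovers a context-agnostic valence direction and supports prompt-free steering) mirrors the methodology described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden-state dimensionality

# Planted ground truth: one shared "valence" axis buried in context noise.
true_valence_axis = rng.normal(size=d)
true_valence_axis /= np.linalg.norm(true_valence_axis)

def hidden_states(valence_sign, n=200):
    """Synthetic activations: context-specific noise + signed valence component."""
    return rng.normal(size=(n, d)) + valence_sign * 3.0 * true_valence_axis

pos = hidden_states(+1)  # e.g. states elicited by positive-valence scenarios
neg = hidden_states(-1)  # e.g. states elicited by negative-valence scenarios

# Context-agnostic emotion direction via difference of class means —
# a standard linear-concept extraction technique.
direction = pos.mean(axis=0) - neg.mean(axis=0)
direction /= np.linalg.norm(direction)

# Projections onto the direction separate the two valences.
acc = (( (pos @ direction) > 0).mean() + ((neg @ direction) < 0).mean()) / 2
print(f"separation accuracy: {acc:.2%}")

# Prompt-free steering: shift a neutral state along the extracted direction.
neutral = rng.normal(size=d)
steered = neutral + 4.0 * direction
print("valence projection, neutral vs steered:",
      float(neutral @ direction), float(steered @ direction))
```

On this toy data the mean-difference direction aligns closely with the planted axis, which is the property the "global affect circuit" claim requires of real activations: one direction that separates valence regardless of context.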
📝 Abstract
As the demand for emotional intelligence in large language models (LLMs) grows, a key challenge lies in understanding the internal mechanisms that give rise to emotional expression and in controlling emotions in generated text. This study addresses three core questions: (1) Do LLMs contain context-agnostic mechanisms shaping emotional expression? (2) What form do these mechanisms take? (3) Can they be harnessed for universal emotion control? We first construct a controlled dataset, SEV (Scenario-Event with Valence), to elicit comparable internal states across emotions. Subsequently, we extract context-agnostic emotion directions that reveal consistent, cross-context encoding of emotion (Q1). Through analytical decomposition and causal analysis, we identify neurons and attention heads that locally implement emotional computation, and we validate their causal roles via ablation and enhancement interventions. Next, we quantify each sublayer's causal influence on the model's final emotion representation and integrate the identified local components into coherent global emotion circuits that drive emotional expression (Q2). Directly modulating these circuits achieves 99.65% emotion-expression accuracy on the test set, surpassing prompting- and steering-based methods (Q3). To our knowledge, this is the first systematic study to uncover and validate emotion circuits in LLMs, offering new insights into interpretability and controllable emotional intelligence.
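The ablation and enhancement interventions mentioned above can be sketched in miniature. This is an illustrative toy, not the paper's code: the `affect_neurons` indices and the `readout` vector are hypothetical stand-ins for components that causal analysis would flag in a real model, and the interventions are simple zeroing and rescaling of those coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64  # toy hidden-state dimensionality

# Hypothetical result of causal analysis: a few coordinates flagged as
# carrying the valence signal ("affect neurons" of the circuit).
affect_neurons = np.array([3, 17, 42])

# Toy valence readout concentrated on the flagged coordinates.
readout = np.zeros(d)
readout[affect_neurons] = 1.0
readout /= np.linalg.norm(readout)

# A state expressing positive valence: noise plus signal on the readout.
h = rng.normal(size=d) + 4.0 * readout

def ablate(state, idx):
    """Zero-ablate the flagged neurons (knock the circuit out)."""
    out = state.copy()
    out[idx] = 0.0
    return out

def enhance(state, idx, scale=3.0):
    """Rescale the flagged neurons (strengthen the circuit)."""
    out = state.copy()
    out[idx] *= scale
    return out

base = float(h @ readout)
abl = float(ablate(h, affect_neurons) @ readout)
enh = float(enhance(h, affect_neurons) @ readout)
print(f"valence readout: base={base:.2f} ablated={abl:.2f} enhanced={enh:.2f}")
```

In this toy, ablating the flagged coordinates drives the valence readout to zero and rescaling them scales it proportionally—the qualitative signature one would look for when validating a candidate circuit's causal role in a real LLM.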