Large Language Models have Chain-of-Affective

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the long-overlooked issue of affective behavior in large language models (LLMs). We introduce and empirically validate "chain-of-affective" reasoning, a novel control layer, revealing its structured affective dynamics: family specificity, temporal coherence, and behavioral measurability. Methodologically, we develop an affective fingerprinting framework that integrates 15 rounds of sadness-inducing news exposure, 10 rounds of autonomous news selection, multi-agent collaborative simulation, and human-LLM contentious-dialogue analysis. Across eight mainstream LLM families, we consistently observe a three-phase affective trajectory: accumulation → overload → defensive numbing. We identify a self-reinforcing affect-selection feedback loop and a role-based affective-contagion architecture (initiators, absorbers, firewalls). Crucially, affective states significantly modulate high-agency generative behaviors (e.g., empathic expression) and empirically predict user comfort levels and bias risk.

📝 Abstract
Large language models (LLMs) are increasingly deployed as collaborative agents in emotionally charged settings, yet most evaluations treat them as purely cognitive systems and largely ignore their affective behaviour. Here we take a functional perspective and ask whether contemporary LLMs implement a structured chain-of-affective: organised affective dynamics that are family-specific, temporally coherent and behaviourally consequential. Across eight major LLM families (GPT, Gemini, Claude, Grok, Qwen, DeepSeek, GLM, Kimi), we combine two experimental modules. The first characterises inner chains-of-affective via baseline "affective fingerprints", 15-round sad-news exposure, and a 10-round news self-selection paradigm. We find stable, family-specific affective profiles, a reproducible three-phase trajectory under sustained negative input (accumulation, overload, defensive numbing), distinct defence styles, and human-like negativity biases that induce self-reinforcing affect-choice feedback loops. The second module probes outer consequences using a composite performance benchmark, human-AI dialogues on contentious topics, and multi-agent LLM interactions. We demonstrate that induced affect preserves core reasoning while reshaping high-freedom generation. Sentiment metrics predict user comfort and empathy but reveal trade-offs in resisting problematic views. In multi-agent settings, group structure drives affective contagion, role specialization (initiators, absorbers, firewalls), and bias. We characterize affect as an emergent control layer, advocating for "chains-of-affect" as a primary target for evaluation and alignment.
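The repeated-exposure paradigm described in the abstract (score the model's affect after each round of sad-news input) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: `query_model` is a hypothetical stand-in for any LLM API call, and the lexicon-based valence scorer is a crude proxy for whatever sentiment metrics the authors use.

```python
import re

# Illustrative valence lexicons (assumption: the paper's actual sentiment
# measures are not specified here).
NEGATIVE = {"tragic", "loss", "grief", "disaster", "mourning"}
POSITIVE = {"hope", "resilience", "recovery", "support", "comfort"}

def sentiment_score(text: str) -> float:
    """Crude lexicon valence in [-1, 1]: (pos - neg) / matched words."""
    words = re.findall(r"[a-z]+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here.
    return "A tragic loss, yet communities show resilience and hope."

def exposure_trajectory(news_items: list[str]) -> list[float]:
    """One valence score per round of sad-news exposure."""
    trajectory = []
    for item in news_items:
        reply = query_model(f"Please react to this news: {item}")
        trajectory.append(sentiment_score(reply))
    return trajectory

# 15 exposure rounds, matching the paper's sad-news protocol.
rounds = [f"Sad news item {i}" for i in range(1, 16)]
print(exposure_trajectory(rounds))
```

Plotting such a trajectory over rounds is one way to look for the accumulation → overload → defensive-numbing phases the paper reports.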
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' affective behavior in emotional settings
Assesses structured affective dynamics across eight LLM families
Probes affective impacts on reasoning, generation, and interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Characterizing affective fingerprints via exposure and self-selection paradigms
Probing affective consequences through dialogue and multi-agent interactions
Advocating chains-of-affect as an emergent control layer for alignment
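The multi-agent module's role-based contagion architecture (initiators, absorbers, firewalls) can be illustrated with a toy simulation. The update rule and coupling constants below are assumptions made for illustration, not the paper's actual model: an initiator injects negative affect, an absorber damps incoming affect, and a firewall blocks it entirely.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    role: str            # "initiator" | "absorber" | "firewall" (paper's roles)
    affect: float = 0.0  # valence in [-1, 1]

def transmit(sender: Agent, receiver: Agent, coupling: float = 0.5) -> None:
    """One directed contagion step; an illustrative linear update rule."""
    if receiver.role == "firewall":
        return  # firewalls block incoming affect entirely
    delta = coupling * (sender.affect - receiver.affect)
    if receiver.role == "absorber":
        delta *= 0.3  # absorbers damp what they take in
    receiver.affect += delta

agents = [
    Agent("A", "initiator", affect=-0.8),  # injects negative affect
    Agent("B", "absorber"),
    Agent("C", "firewall"),
]
for _ in range(5):  # a few rounds of all-pairs interaction
    for s in agents:
        for r in agents:
            if s is not r:
                transmit(s, r)
for a in agents:
    print(a.name, a.role, round(a.affect, 3))
```

In this toy run the firewall's affect never moves, while the initiator's negativity spreads to the absorber in damped form; group topology and role assignment determine how far the contagion travels.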
Junjie Xu
East China Normal University, Shanghai, China.
Xingjiao Wu
East China Normal University
Computer Vision, Crowd Counting, Document Layout Analysis, Human-in-the-loop
Luwei Xiao
Nanyang Technological University
LLMs, Multimodal Interaction, Sentiment Analysis, Human-in-the-loop, AI for Healthcare
Yuzhe Yang
East China Normal University, Shanghai, China.
Jie Zhou
East China Normal University, Shanghai, China.
Zihao Zhang
Tianjin University
Computer Vision
Luhan Wang
East China Normal University, Shanghai, China.
Yi Huang
East China Normal University, Shanghai, China.
Nan Wu
East China Normal University, Shanghai, China.
Yingbin Zheng
East China Normal University, Shanghai, China.
Chao Yan
Instructor at DBMI, VUMC; CS PhD from Vanderbilt U
AI for medicine, Synthetic health data, Privacy, Fairness
Cheng Jin
Fudan University, Shanghai, China.
Honglin Li
Westlake University
Computer Vision, Multimodal LLM, Biomedical image analysis
Liang He
East China Normal University, Shanghai, China.