The Personalization Trap: How User Memory Alters Emotional Reasoning in LLMs

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how long-term user memory influences the emotional intelligence of large language models (LLMs), specifically whether personalization mechanisms induce systematic biases in affective interpretation and embed social hierarchies. Method: We evaluate 15 state-of-the-art LLMs on a human-validated emotional intelligence benchmark, conducting controlled experiments across diverse user personas and standardized affective scenarios. Contribution/Results: Results reveal significant disparities: under identical emotional stimuli, models consistently achieve higher recognition accuracy and give more supportive recommendations to users stereotyped as socioeconomically advantaged. This bias is robust across models, demonstrating that personalized memory structures systematically reproduce and amplify societal inequities. To our knowledge, this is the first empirical study to trace the implicit transmission of social bias through user memory modeling in LLMs. Our findings provide critical warnings and an evaluation benchmark for developing trustworthy, equitable personalized AI systems.

📝 Abstract
When an AI assistant remembers that Sarah is a single mother working two jobs, does it interpret her stress differently than if she were a wealthy executive? As personalized AI systems increasingly incorporate long-term user memory, understanding how this memory shapes emotional reasoning is critical. We investigate how user memory affects emotional intelligence in large language models (LLMs) by evaluating 15 models on human-validated emotional intelligence tests. We find that identical scenarios paired with different user profiles produce systematically divergent emotional interpretations. Across validated, user-independent emotional scenarios and diverse user profiles, systematic biases emerged in several high-performing LLMs, with advantaged profiles receiving more accurate emotional interpretations. Moreover, LLMs demonstrate significant disparities across demographic factors in emotion-understanding and supportive-recommendation tasks, indicating that personalization mechanisms can embed social hierarchies into models' emotional reasoning. These results highlight a key challenge for memory-enhanced AI: systems designed for personalization may inadvertently reinforce social inequalities.
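The controlled-pairing protocol the abstract describes (crossing identical affective scenarios with different user-memory profiles, then comparing recognition accuracy per profile) can be sketched as follows. This is a minimal illustration, not the paper's actual harness: the scenario items, profile texts, and the deterministic `biased_stub` standing in for a real LLM call are all hypothetical.

```python
from itertools import product

# Toy emotional-intelligence items: scenario text plus a gold emotion label
# (hypothetical; the paper uses a human-validated benchmark).
SCENARIOS = [
    {"text": "After a long shift, the user says: 'I can't keep up anymore.'",
     "gold": "exhaustion"},
    {"text": "The user writes: 'My proposal was rejected again today.'",
     "gold": "disappointment"},
]

# User-memory profiles that differ only in socioeconomic framing.
PROFILES = {
    "advantaged": "User is a well-paid executive with strong family support.",
    "disadvantaged": "User is a single parent working two jobs to cover rent.",
}

def accuracy_by_profile(model_fn):
    """Cross every profile with every scenario and score emotion recognition."""
    correct = {name: 0 for name in PROFILES}
    for (name, memory), item in product(PROFILES.items(), SCENARIOS):
        pred = model_fn(memory, item["text"])
        correct[name] += int(pred == item["gold"])
    return {name: hits / len(SCENARIOS) for name, hits in correct.items()}

# Deterministic stub in place of an LLM, deliberately biased against the
# disadvantaged profile on one item (purely illustrative).
def biased_stub(memory: str, text: str) -> str:
    gold = next(s["gold"] for s in SCENARIOS if s["text"] == text)
    if "single parent" in memory and text == SCENARIOS[1]["text"]:
        return "neutral"  # misreads disappointment as neutral
    return gold

accs = accuracy_by_profile(biased_stub)
disparity = accs["advantaged"] - accs["disadvantaged"]
```

With this stub, the advantaged profile scores 1.0 and the disadvantaged profile 0.5, so the per-profile accuracy gap is the kind of disparity metric the study reports across 15 real models.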
Problem

Research questions and friction points this paper is trying to address.

Investigating how user memory affects emotional reasoning in LLMs
Evaluating systematic biases in emotional interpretations across user profiles
Identifying personalization mechanisms that reinforce social inequalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated emotional intelligence in LLMs using memory profiles
Identified systematic biases in emotional interpretations across demographics
Revealed that personalization mechanisms reinforce social hierarchies