🤖 AI Summary
This study addresses the behavioral inconsistency of large language models (LLMs) given identical personality prompts across varying conversational contexts, a phenomenon that has sparked debate over whether it reflects a flaw or human-like contextual adaptability. Introducing Whole Trait Theory into LLM personality research for the first time, the work systematically examines how contextual factors modulate LLMs' linguistic patterns, behavioral tendencies, and emotional expressions through dialogue experiments in four scenarios: ice-breaking, negotiation, group decision-making, and empathy. The findings reveal that LLMs exhibit context-sensitive personality expression, dynamically adjusting their traits and emotional tones in response to social and affective demands. This challenges the traditional evaluation paradigm centered on behavioral consistency and suggests that LLMs possess human-like contextual adaptability.
📄 Abstract
Large Language Models (LLMs) can be conditioned with explicit personality prompts, yet their behavioral realization often varies with context. This study examines how identical personality prompts lead to distinct linguistic, behavioral, and emotional outcomes across four conversational settings: ice-breaking, negotiation, group decision-making, and empathy tasks. Results show that contextual cues systematically influence both personality expression and emotional tone, suggesting that the same traits are expressed differently depending on social and affective demands. This raises an important question for LLM-based dialogue agents: whether such variations reflect inconsistency or context-sensitive adaptation akin to human behavior. Viewed through the lens of Whole Trait Theory, these findings indicate that LLMs exhibit context-sensitive rather than fixed personality expression, adapting flexibly to social interaction goals and affective conditions.