🤖 AI Summary
This study addresses the challenge of enabling large language models (LLMs) to simultaneously maintain personality consistency and adapt to contextual demands. It introduces, for the first time, a systematic integration of Jungian psychological type theory into LLM-based personality modeling, proposing a tripartite architecture comprising dominant-auxiliary coordination, reinforcement-compensation, and reflection mechanisms. This framework supports stable personality expression, dynamic contextual adaptation, and long-term personality evolution. Personality alignment is evaluated using the Myers-Briggs Type Indicator (MBTI), while dynamic regulation is achieved through reinforcement learning coupled with reflective processes. Experimental results show that the proposed approach enhances the naturalness and authenticity of agent interactions across diverse and challenging scenarios, balancing personality consistency with contextual adaptability.
📝 Abstract
Large Language Models (LLMs) are increasingly shaping human-computer interaction (HCI), from personalized assistants to social simulations. Beyond language competence, researchers are exploring whether LLMs can exhibit human-like characteristics that influence engagement, decision-making, and perceived realism. Personality, in particular, is critical, yet existing approaches often struggle to achieve expression that is both nuanced and adaptable. We present a framework that models LLM personality via Jungian psychological types, integrating three mechanisms: a dominant-auxiliary coordination mechanism for coherent core expression, a reinforcement-compensation mechanism for temporary adaptation to context, and a reflection mechanism that drives long-term personality evolution. This design allows the agent to maintain nuanced traits while dynamically adjusting to interaction demands and gradually updating its underlying structure. As a preliminary structured assessment, personality alignment is evaluated with Myers-Briggs Type Indicator questionnaires and tested under diverse challenge scenarios. Findings suggest that evolving, personality-aware LLMs can support coherent, context-sensitive interactions, enabling naturalistic agent design in HCI.
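To make the three-mechanism design concrete, the layered state the abstract describes could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, the `[0, 1]` trait scores on the four MBTI dichotomies, and the update rules are all assumptions introduced for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalityAgent:
    # Hypothetical core state: preference scores in [0, 1] toward the first
    # pole of each MBTI dichotomy (e.g. 0.8 on "E/I" leans extraverted).
    traits: dict = field(default_factory=lambda: {
        "E/I": 0.8, "S/N": 0.3, "T/F": 0.7, "J/P": 0.6})
    adjustment: dict = field(default_factory=dict)  # temporary contextual shifts
    history: list = field(default_factory=list)     # log of shifts for reflection

    def express(self, dim):
        """Dominant-auxiliary coordination (sketch): expressed behavior is the
        stable core trait plus any temporary contextual shift, clipped to [0, 1]."""
        return min(1.0, max(0.0, self.traits[dim] + self.adjustment.get(dim, 0.0)))

    def reinforce(self, dim, delta):
        """Reinforcement-compensation (sketch): a context-driven, temporary
        shift that does not touch the core traits."""
        self.adjustment[dim] = self.adjustment.get(dim, 0.0) + delta
        self.history.append((dim, delta))

    def reflect(self, rate=0.1):
        """Reflection (sketch): consolidate a fraction of the accumulated
        shifts into the core traits, then clear the temporary layer —
        the long-term evolution step."""
        for dim, delta in self.history:
            self.traits[dim] = min(1.0, max(0.0, self.traits[dim] + rate * delta))
        self.history.clear()
        self.adjustment.clear()

agent = PersonalityAgent()
agent.reinforce("E/I", -0.3)   # a quiet, one-on-one context suppresses extraversion
print(agent.express("E/I"))    # shifted expression while the context lasts
agent.reflect()                # repeated shifts slowly reshape the core
print(agent.traits["E/I"])
```

The key design point the sketch illustrates is the separation of timescales: `reinforce` changes behavior immediately but reversibly, while `reflect` converts persistent contextual pressure into slow, lasting change in the core profile.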