Stable Personas: Dual-Assessment of Temporal Stability in LLM-Based Human Simulation

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of inconsistent personality expression in large language models (LLMs) during human-like behavioral simulation, which undermines their credibility in social behavior research. The authors propose a dual-perspective framework integrating self-report and observer-based assessments, revealing for the first time that while LLMs exhibit highly stable self-reported personalities, their externally observable personality expressions significantly decay over extended dialogues. Drawing on 3,473 cross-session and 1,370 within-session interactions across seven mainstream LLMs, three semantically equivalent prompts, and four ADHD-related personality profiles, the work systematically identifies boundary conditions under which personality expression deteriorates. These findings provide critical empirical grounding for developing more reliable LLM-based social simulations.

📝 Abstract
Large Language Models (LLMs) acting as artificial agents offer the potential for scalable behavioral research, yet their validity depends on whether LLMs can maintain stable personas across extended conversations. We address this question using a dual-assessment framework measuring both self-reported characteristics and observer-rated persona expression. Across two experiments testing four persona conditions (default, high, moderate, and low ADHD presentations), seven LLMs, and three semantically equivalent persona prompts, we examine between-conversation stability (3,473 conversations) and within-conversation stability (1,370 conversations of 18 turns each). Self-reports remain highly stable both between and within conversations. However, observer ratings reveal a tendency for persona expressions to decline during extended conversations. These findings suggest that persona-instructed LLMs produce stable, persona-aligned self-reports, an important prerequisite for behavioral research, while identifying this regression tendency as a boundary condition for multi-agent social simulation.
Problem

Research questions and friction points this paper is trying to address.

persona stability
large language models
behavioral simulation
temporal consistency
human-like agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

temporal stability
persona consistency
dual-assessment framework
LLM-based human simulation
behavioral research
Authors

Jana Gonnermann-Muller
Zuse Institute Berlin, Berlin, Germany; Weizenbaum Institute, Berlin, Germany

Jennifer Haase
Research Associate, Humboldt-University Berlin
Creativity, GenAI Collaboration, Human Automation Interaction

Nicolas Leins
Zuse Institute Berlin, Berlin, Germany

Thomas Kosch
Junior Professor of Computer Science, Humboldt University of Berlin
Human-AI Interaction, Human Augmentation, User Sensing and Inference, Meta HCI Research

S. Pokutta
Zuse Institute Berlin, Berlin, Germany; Technical University Berlin, Berlin, Germany