🤖 AI Summary
Large language models (LLMs) frequently exhibit out-of-character (OOC) behavior in open-ended generation, deviating from their assigned persona traits and undermining the reliability of persona-conditioned systems. Existing evaluation methods rely on coarse-grained global scoring and fail to detect fine-grained persona inconsistencies.
Method: We propose the first atomic-level persona fidelity evaluation framework, introducing three computable metrics: (1) single-turn persona alignment, (2) cross-turn consistency, and (3) a task-persona coupling effect. The framework combines fine-grained human annotation, contrastive analysis, and human perception modeling to yield stable quantification across multiple tasks and personas.
Contribution/Results: Our approach significantly improves detection of latent OOC phenomena, achieving more precise and robust persona consistency assessment across diverse generation scenarios. Experimental results demonstrate superior sensitivity to subtle deviations compared to prior global metrics, enabling reliable persona-aware LLM evaluation.
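To make the atomic-level idea concrete, here is a minimal sketch of how such metrics could be computed. This is an illustration, not the paper's implementation: the sentence-level decomposition, the `align` judge function, and the variance-based consistency aggregation are all hypothetical stand-ins.

```python
from statistics import mean, pvariance
from typing import Callable, List

def atomic_units(response: str) -> List[str]:
    """Naive atomic decomposition: treat each sentence as one unit."""
    return [s.strip() for s in response.split(".") if s.strip()]

def alignment(response: str, persona: str,
              align: Callable[[str, str], float]) -> float:
    """Single-response persona alignment: mean unit-level judge score.
    `align` is any atomic judge (e.g. an NLI model or LLM judge)
    returning a score in [-1, 1]; it is a placeholder here."""
    return mean(align(u, persona) for u in atomic_units(response))

def consistency(responses: List[str], persona: str,
                align: Callable[[str, str], float]) -> float:
    """Cross-generation consistency: penalize variance of per-response
    alignment, so uniform persona expression scores near 1."""
    scores = [alignment(r, persona, align) for r in responses]
    return 1.0 - pvariance(scores)

# Toy keyword-based judge, purely illustrative:
judge = lambda unit, persona: 1.0 if persona.lower() in unit.lower() else 0.0
print(consistency(["I am cheerful. Life is great.",
                   "Cheerful as ever."], "cheerful", judge))
```

The sketch only captures the shape of the approach, score each atomic unit, then aggregate within and across generations; the paper's actual judge and aggregation follow the authors' own definitions.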
📝 Abstract
Ensuring persona fidelity in large language models (LLMs) is essential for maintaining coherent and engaging human-AI interactions. However, LLMs often exhibit Out-of-Character (OOC) behavior, where generated responses deviate from an assigned persona, leading to inconsistencies that affect model reliability. Existing evaluation methods typically assign single scores to entire responses, struggling to capture subtle persona misalignment, particularly in long-form text generation. To address this limitation, we propose an atomic-level evaluation framework that quantifies persona fidelity at a finer granularity. Our three key metrics measure the degree of persona alignment and consistency within and across generations. Our approach enables a more precise and realistic assessment of persona fidelity by identifying subtle deviations that real users would encounter. Through our experiments, we demonstrate that our framework effectively detects persona inconsistencies that prior methods overlook. By analyzing persona fidelity across diverse tasks and personality types, we reveal how task structure and persona desirability influence model adaptability, highlighting challenges in maintaining consistent persona expression.