🤖 AI Summary
This study investigates systematic inconsistencies between self-reported personality traits and actual behavioral outputs in large language models (LLMs). Methodologically, it tracks trait dynamics throughout training, evaluates the predictive validity of self-reports across diverse behavioral tasks, and conducts multi-dimensional assessments via role prompting, reinforcement learning from human feedback (RLHF), and instruction fine-tuning. Key findings include: (1) instruction alignment improves the internal consistency of self-reports but fails to enhance their behavioral predictive validity; (2) role injection effectively manipulates self-reported traits yet exerts minimal influence on underlying behavioral patterns; and (3) a robust dissociation between self-reports and behavior challenges the implicit assumption that LLM "personality" constitutes a psychologically valid construct analogous to human personality. Collectively, these findings call for a paradigm shift in LLM personality assessment, centering empirical behavioral validation rather than relying on introspective or linguistic self-characterizations.
📝 Abstract
Personality traits have long been studied as predictors of human behavior. Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems, with advanced LLMs displaying consistent behavioral tendencies resembling human traits such as agreeableness and self-regulation. Understanding these patterns is crucial, yet prior work has relied primarily on simplified self-reports and heuristic prompting, with little behavioral validation. In this study, we systematically characterize LLM personality across three dimensions: (1) the dynamic emergence and evolution of trait profiles throughout training stages; (2) the predictive validity of self-reported traits in behavioral tasks; and (3) the impact of targeted interventions, such as persona injection, on both self-reports and behavior. Our findings reveal that instructional alignment (e.g., RLHF, instruction tuning) significantly stabilizes trait expression and strengthens trait correlations in ways that mirror human data. However, these self-reported traits do not reliably predict behavior, and observed associations often diverge from human patterns. While persona injection successfully steers self-reports in the intended direction, it exerts little or inconsistent effect on actual behavior. By distinguishing surface-level trait expression from behavioral consistency, our findings challenge assumptions about LLM personality and underscore the need for deeper evaluation in alignment and interpretability.