🤖 AI Summary
Existing evaluations of large language models' (LLMs) personality inference capabilities lack ecological validity, particularly in authentic conversational settings. Method: We introduce the first benchmark dataset integrating semi-structured interview transcripts with continuous Big Five personality scores, and systematically evaluate multiple paradigms—including zero-shot and chain-of-thought prompting, LoRA-finetuned RoBERTa and LLaMA, and regression on static embeddings (BERT, text-embedding-3-small). Contribution/Results: All models achieve Pearson correlations ≤0.26 with ground-truth personality scores, well below psychometrically acceptable thresholds, and chain-of-thought prompting yields no significant improvement. This work constitutes the first assessment of LLMs' alignment with psychological constructs using real-world interviews and fine-grained, continuous personality annotations. It reveals fundamental limitations in current models' capacity for reliable personality inference, establishing a novel, ecologically grounded benchmark and a critical framework for advancing trustworthy personality computing.
📝 Abstract
Large Language Models (LLMs) are increasingly deployed in roles requiring nuanced psychological understanding, such as emotional support agents, counselors, and decision-making assistants. However, their ability to interpret human personality traits, a critical requirement for such applications, remains underexplored, particularly in ecologically valid conversational settings. While prior work has simulated LLM "personas" using discrete Big Five labels on social media data, the alignment of LLMs with continuous, ground-truth personality assessments derived from natural interactions is largely unexamined. To address this gap, we introduce a novel benchmark comprising semi-structured interview transcripts paired with validated continuous Big Five trait scores. Using this dataset, we systematically evaluate LLM performance across three paradigms: (1) zero-shot and chain-of-thought prompting with GPT-4.1 Mini, (2) LoRA-based fine-tuning applied to both RoBERTa and Meta-LLaMA architectures, and (3) regression using static embeddings from pretrained BERT and OpenAI's text-embedding-3-small. Our results reveal that all Pearson correlations between model predictions and ground-truth personality traits remain below 0.26, highlighting the limited alignment of current LLMs with validated psychological constructs. Chain-of-thought prompting offers minimal gains over zero-shot prompting, suggesting that personality inference relies more on latent semantic representation than on explicit reasoning. These findings underscore the challenges of aligning LLMs with complex human attributes and motivate future work on trait-specific prompting, context-aware modeling, and alignment-oriented fine-tuning.
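The embedding-regression baseline (paradigm 3) and the Pearson-correlation evaluation can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the embeddings, trait scores, dimensions, and the ridge regularization strength are synthetic stand-ins for the interview transcripts and validated Big Five annotations.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def pearson_r(a, b):
    """Pearson correlation between predicted and ground-truth trait scores."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 "transcripts" as 768-d static embeddings,
# one continuous trait score each (e.g. Extraversion); in the paper's
# setting one regressor would be fit per Big Five trait.
X = rng.normal(size=(200, 768))
y = 0.3 * X[:, 0] + rng.normal(scale=1.0, size=200)  # weak signal + noise

# Simple train/test split, fit on the first 150, evaluate on the rest.
w = ridge_fit(X[:150], y[:150], alpha=10.0)
r = pearson_r(X[150:] @ w, y[150:])
print(f"test Pearson r = {r:.2f}")
```

A correlation near or below 0.26 on held-out data, as the paper reports across all paradigms, would indicate that the embeddings carry little linearly recoverable trait signal.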