Evaluating LLM Alignment on Personality Inference from Real-World Interview Data

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of large language models’ (LLMs) personality inference capabilities lack ecological validity, particularly in authentic conversational settings. Method: We introduce the first benchmark dataset integrating semi-structured interview transcripts with continuous Big Five personality scores, and systematically evaluate multiple paradigms—including zero-shot and chain-of-thought prompting, LoRA-finetuned RoBERTa and LLaMA, and regression on static embeddings (BERT, text-embedding-3-small). Contribution/Results: All models achieve Pearson correlations ≤0.26 with ground-truth personality scores—well below psychometrically acceptable thresholds—and chain-of-thought prompting yields no significant improvement. This work constitutes the first assessment of LLMs’ alignment with psychological constructs using real-world interviews and fine-grained, continuous personality annotations. It reveals fundamental limitations in current models’ capacity for reliable personality inference, establishing a novel, ecologically grounded benchmark and a critical framework for advancing trustworthy personality computation.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed in roles requiring nuanced psychological understanding, such as emotional support agents, counselors, and decision-making assistants. However, their ability to interpret human personality traits, a critical aspect of such applications, remains underexplored, particularly in ecologically valid conversational settings. While prior work has simulated LLM "personas" using discrete Big Five labels on social media data, the alignment of LLMs with continuous, ground-truth personality assessments derived from natural interactions is largely unexamined. To address this gap, we introduce a novel benchmark comprising semi-structured interview transcripts paired with validated continuous Big Five trait scores. Using this dataset, we systematically evaluate LLM performance across three paradigms: (1) zero-shot and chain-of-thought prompting with GPT-4.1 Mini, (2) LoRA-based fine-tuning applied to both RoBERTa and Meta-LLaMA architectures, and (3) regression using static embeddings from pretrained BERT and OpenAI's text-embedding-3-small. Our results reveal that all Pearson correlations between model predictions and ground-truth personality traits remain below 0.26, highlighting the limited alignment of current LLMs with validated psychological constructs. Chain-of-thought prompting offers minimal gains over zero-shot prompting, suggesting that personality inference relies more on latent semantic representation than on explicit reasoning. These findings underscore the challenges of aligning LLMs with complex human attributes and motivate future work on trait-specific prompting, context-aware modeling, and alignment-oriented fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM alignment with real personality inference from interviews
Assessing LLM ability to interpret human traits in conversational settings
Measuring alignment between LLM predictions and continuous personality assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA-based fine-tuning on RoBERTa and LLaMA
Chain-of-thought prompting with GPT-4.1 Mini
Regression using BERT and OpenAI embeddings
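The third paradigm above, regressing trait scores on static embeddings and scoring predictions by Pearson correlation, can be sketched roughly as follows. This is a minimal illustration with synthetic stand-in data: in the paper the features would be BERT or text-embedding-3-small vectors of interview transcripts and the targets continuous Big Five scores, and the specific regressor (ridge here) is an assumption, not a detail confirmed by the summary.

```python
# Sketch of paradigm (3): regression on static embeddings, evaluated
# with Pearson correlation against continuous trait scores.
# Synthetic data stands in for transcript embeddings and Big Five scores.
import numpy as np
from numpy.random import default_rng
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = default_rng(0)
n_transcripts, dim = 200, 64
X = rng.normal(size=(n_transcripts, dim))          # stand-in embeddings
w = rng.normal(size=dim)
# Weak linear signal plus noise, mimicking the low-alignment regime
# (r <= 0.26) the paper reports for real data.
y = 0.1 * (X @ w) + rng.normal(size=n_transcripts)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
r, p = pearsonr(y_te, model.predict(X_te))
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```

In the paper's setup this correlation would be computed per trait (one regression per Big Five dimension) and compared against psychometric acceptability thresholds.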
Jianfeng Zhu
Department of Computer Science, Kent State University
Julina Maharjan
Department of Computer Science, Kent State University
Xinyu Li
Department of Computer Science, Kent State University
Karin G. Coifman
Department of Psychological Sciences, Kent State University
Ruoming Jin
Professor of Computer Science, Kent State University
Big Data · Deep Learning · Graph Analytics · Graph Database · Data Mining