From Five Dimensions to Many: Large Language Models as Precise and Interpretable Psychological Profilers

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) can accurately reconstruct the correlational structure among human psychological traits, and predict responses to nine additional psychological inventories, using only brief inputs from the Five-Factor Model (FFM). Method: Zero-shot prompting without parameter fine-tuning; analysis of reasoning traces reveals a two-stage process in which LLMs first compress raw FFM inputs into natural-language personality summaries, then generate target-scale responses conditioned on these summaries, capturing trait-level synergistic patterns. Contribution/Results: LLM-inferred correlation structures align closely with empirical human data (R² > 0.89), significantly outperforming semantic-similarity baselines and approaching the performance of supervised machine-learning models trained directly on the dataset. This work provides the first systematic evidence that LLMs possess zero-shot abstract modeling and interpretable reasoning capabilities over latent human psychological structure.

📝 Abstract
Psychological constructs within individuals are widely believed to be interconnected. We investigated whether and how Large Language Models (LLMs) can model the correlational structure of human psychological traits from minimal quantitative inputs. We prompted various LLMs with Big Five Personality Scale responses from 816 human individuals to role-play their responses on nine other psychological scales. LLMs demonstrated remarkable accuracy in capturing human psychological structure, with the inter-scale correlation patterns from LLM-generated responses strongly aligning with those from human data $(R^2>0.89)$. This zero-shot performance substantially exceeded predictions based on semantic similarity and approached the accuracy of machine learning algorithms trained directly on the dataset. Analysis of reasoning traces revealed that LLMs use a systematic two-stage process: First, they transform raw Big Five responses into natural language personality summaries through information selection and compression, analogous to generating sufficient statistics. Second, they generate target scale responses based on reasoning from these summaries. For information selection, LLMs identify the same key personality factors as trained algorithms, though they fail to differentiate item importance within factors. The resulting compressed summaries are not merely redundant representations but capture synergistic information: adding them to original scores enhances prediction alignment, suggesting they encode emergent, second-order patterns of trait interplay. Our findings demonstrate that LLMs can precisely predict individual participants' psychological traits from minimal data through a process of abstraction and reasoning, offering both a powerful tool for psychological simulation and valuable insights into their emergent reasoning capabilities.
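The paper's headline metric, the alignment between LLM-derived and human inter-scale correlation patterns, can be sketched as follows. This is a minimal illustration with simulated stand-in data (the actual dataset, scale scoring, and the paper's exact $R^2$ computation are not specified here); the array shapes and noise level are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: scale scores for 816 participants on 10 inventories.
# In the study these would be human scale scores and LLM role-played responses.
human = rng.normal(size=(816, 10))
llm = human + rng.normal(scale=0.3, size=(816, 10))  # simulated LLM output

def offdiag_corr(scores):
    """Return the upper-triangle (off-diagonal) entries of the
    inter-scale correlation matrix as a flat vector."""
    c = np.corrcoef(scores, rowvar=False)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

h = offdiag_corr(human)   # human inter-scale correlation pattern
l = offdiag_corr(llm)     # LLM-derived pattern

# R^2 of the LLM-derived correlations against the human ones
ss_res = np.sum((h - l) ** 2)
ss_tot = np.sum((h - h.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"structure alignment R^2 = {r2:.3f}")
```

With 10 scales there are 45 unique scale pairs, so each pattern is a 45-vector; the reported alignment of $R^2 > 0.89$ refers to the agreement of such vectors between LLM-generated and human data.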
Problem

Research questions and friction points this paper is trying to address.

Modeling human psychological trait correlations using minimal quantitative inputs
Predicting individual psychological traits from Big Five Personality Scale responses
Investigating LLMs' reasoning process for psychological profiling accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs model psychological traits from minimal inputs
Two-stage process: personality summary then reasoning
Compressed summaries capture synergistic trait information
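The two-stage process listed above (summary, then reasoning) can be sketched as a prompting pipeline. The prompt wording and the `llm` callable below are hypothetical placeholders, not the paper's actual prompts; this only illustrates the structure of the approach.

```python
def stage1_summary(big5_scores, llm):
    """Stage 1: compress raw Big Five item scores into a
    natural-language personality summary (hypothetical prompt)."""
    prompt = (
        "Here are a participant's Big Five item responses: "
        f"{big5_scores}. Write a concise personality summary."
    )
    return llm(prompt)

def stage2_responses(summary, target_scale_items, llm):
    """Stage 2: role-play target-scale responses conditioned on
    the summary rather than on the raw scores."""
    prompt = (
        f"Personality summary: {summary}\n"
        "As this person, answer each item on a 1-5 Likert scale:\n"
        + "\n".join(target_scale_items)
    )
    return llm(prompt)

# Usage with any chat-completion backend wrapped as llm(prompt) -> str:
#   summary = stage1_summary(scores, llm)
#   answers = stage2_responses(summary, items, llm)
```

Conditioning stage 2 on the summary rather than the raw scores is what lets the summary act like a sufficient statistic, which the paper probes by checking whether adding summaries to raw scores improves alignment.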
Yi-Fei Liu
Peking-Tsinghua Center for Life Sciences, Peking University
Yi-Long Lu
Peking University
decision making · problem solving · computational modeling
Di He
National Key Laboratory of General Artificial Intelligence, Peking University
Hang Zhang
School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University