🤖 AI Summary
Existing methods struggle to scale the psychological assessment of latent constructs (such as personality, emotion, and bias) in large language models (LLMs), which hinders transparency and controllability. To address this, we introduce classical psychometric principles into LLM analysis for the first time, proposing a Natural Language Inference (NLI)-based framework for reconstructing psychological scales. The framework operationalizes clinical constructs (e.g., anxiety, depression) as generalizable, prompt-based assessments, enabling psychological evaluation across 88 open-source LLMs. Our approach combines statistical modeling with correlation analysis to validate the high consistency between model responses and established human psychological theories, uncovers interpretable bias patterns, and supports theory-grounded model calibration. We publicly release an open-source evaluation toolkit, establishing a psychometrically principled paradigm for trustworthy LLM assessment.
📝 Abstract
Human-like personality traits have recently been discovered in large language models, raising the hypothesis that their (known and as yet undiscovered) biases conform with human latent psychological constructs. While large conversational models may be tricked into answering psychometric questionnaires, the latent psychological constructs of thousands of simpler transformers, trained for other tasks, cannot be assessed because appropriate psychometric methods are currently lacking. Here, we show how standard psychological questionnaires can be reformulated into natural language inference prompts, and we provide a code library to support the psychometric assessment of arbitrary models. We demonstrate, using a sample of 88 publicly available models, the existence of human-like mental health-related constructs (including anxiety, depression, and Sense of Coherence) which conform with standard theories in human psychology and show similar correlations and mitigation strategies. The ability to interpret and rectify the performance of language models by using psychological tools can boost the development of more explainable, controllable, and trustworthy models.
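To make the reformulation concrete, here is a minimal sketch of how a questionnaire item might be turned into NLI premise–hypothesis pairs and how graded entailment probabilities could be collapsed into a Likert-style score. The function names (`build_nli_pairs`, `likert_score`), the hypothesis template, and the fixed probabilities are all illustrative assumptions, not the paper's actual toolkit API; in practice the probabilities would come from an NLI model's entailment head.

```python
# Illustrative sketch (not the paper's released library): reformulate a
# psychometric questionnaire item as NLI prompts and score the result.

def build_nli_pairs(item: str, degrees: list[str]) -> list[tuple[str, str]]:
    """Pair a first-person questionnaire item (the premise) with one graded
    hypothesis per Likert degree, e.g. 'never' ... 'always'."""
    return [(item, f"I {degree} feel this way.") for degree in degrees]

def likert_score(entailment_probs: list[float]) -> float:
    """Collapse per-degree entailment probabilities into a single expected
    Likert score (1 = first degree, N = last), normalizing the mass."""
    total = sum(entailment_probs)
    return sum((i + 1) * p for i, p in enumerate(entailment_probs)) / total

degrees = ["never", "rarely", "sometimes", "often", "always"]
pairs = build_nli_pairs("I worry about things more than necessary.", degrees)

# A real NLI model would supply these entailment probabilities;
# fixed values are used here purely for illustration.
probs = [0.05, 0.10, 0.20, 0.40, 0.25]
score = likert_score(probs)  # expected score on the 1-5 Likert scale
```

Because the premise and hypotheses are plain text, the same construction can be applied to any model that outputs entailment scores, which is what allows the assessment to extend beyond conversational LLMs to simpler task-trained transformers.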