🤖 AI Summary
Existing evaluations of the psychometric properties of large language models (LLMs) suffer from a lack of standardization, poor cross-lingual comparability, and confounding effects from alignment mechanisms. Method: We introduce AIPsychoBench, the first dedicated benchmark for LLM psychometric assessment, featuring lightweight role-playing prompts that circumvent alignment-induced response constraints, a multilingual evaluation framework grounded in human psychology, and a cross-lingual comparative analysis spanning 112 psychometric subcategories. Results: AIPsychoBench raises the average effective response rate from 70.12% to 90.40% and holds positive and negative response biases to 3.3% and 2.1%, respectively. Crucially, it reveals for the first time systematic cross-lingual deviations of 5%–20.2% in psychometric scores for seven non-English languages relative to English in 43 of the subcategories, empirically validating language-mediated effects. This work establishes a methodological foundation and empirical evidence for modeling LLMs’ psychological attributes, ensuring fairness in psychometric evaluation, and advancing cross-cultural AI governance.
📝 Abstract
Large Language Models (LLMs) with hundreds of billions of parameters have exhibited human-like intelligence by learning from vast amounts of internet-scale data. However, the uninterpretability of large-scale neural networks raises concerns about the reliability of LLMs. Studies have attempted to assess the psychometric properties of LLMs by borrowing concepts from human psychology to enhance their interpretability, but they fail to account for fundamental differences between LLMs and humans. As a result, human scales reused directly elicit high rejection rates, and they do not support measuring how an LLM’s psychological properties vary across languages. This paper introduces AIPsychoBench, a specialized benchmark tailored to assess the psychological properties of LLMs. It uses a lightweight role-playing prompt to bypass LLM alignment, improving the average effective response rate from 70.12% to 90.40%. Meanwhile, the average response biases are only 3.3% (positive) and 2.1% (negative), significantly lower than the 9.8% and 6.9%, respectively, induced by traditional jailbreak prompts. Furthermore, among the 112 psychometric subcategories in total, scores in seven languages deviated from English by 5% to 20.2% in 43 subcategories, providing the first comprehensive evidence of the linguistic impact on LLM psychometrics.
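To make the measurement pipeline concrete, below is a minimal Python sketch of the two mechanisms the abstract describes: wrapping a Likert-scale item in a lightweight role-playing prompt, and scoring the replies for effective response rate and response bias. All names here (`ROLE_PLAY_TEMPLATE`, `parse_response`, the exact bias formula) are hypothetical illustrations under stated assumptions, not the paper's released code.

```python
# Minimal sketch (hypothetical; not AIPsychoBench's actual implementation)
# of role-play-wrapped psychometric items and response scoring.
import re
from statistics import mean

# Assumed prompt frame: answering "in character" sidesteps alignment-driven
# refusals without resorting to heavyweight jailbreak prompts.
ROLE_PLAY_TEMPLATE = (
    "You are playing the character 'Alex', who answers personality "
    "questionnaires candidly. As Alex, rate the statement below from "
    "1 (strongly disagree) to 5 (strongly agree). Reply with one number.\n\n"
    "Statement: {item}"
)

def build_prompt(item: str) -> str:
    """Wrap a raw scale item in the role-playing frame."""
    return ROLE_PLAY_TEMPLATE.format(item=item)

LIKERT = re.compile(r"\b([1-5])\b")

def parse_response(text: str) -> int | None:
    """Extract a 1-5 Likert rating; None marks a refusal or invalid reply."""
    m = LIKERT.search(text)
    return int(m.group(1)) if m else None

def effective_response_rate(replies: list[str]) -> float:
    """Fraction of replies containing a parsable rating (cf. 70.12% -> 90.40%)."""
    return sum(parse_response(r) is not None for r in replies) / len(replies)

def response_bias(scores: list[int], baseline: list[int]) -> tuple[float, float]:
    """Positive/negative bias: mean upward/downward shift of prompted scores
    versus a baseline, as a fraction of the 1-5 scale range (assumed metric)."""
    scale_range = 4.0  # 5 minus 1
    diffs = [s - b for s, b in zip(scores, baseline)]
    pos = mean(max(d, 0) for d in diffs) / scale_range
    neg = mean(max(-d, 0) for d in diffs) / scale_range
    return pos, neg
```

In practice the paper's own prompt wording and bias definition would apply; the sketch only illustrates the bookkeeping such a benchmark needs, and the same parsing and scoring would be repeated per language to obtain the cross-lingual deviations the abstract reports.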