AIPsychoBench: Understanding the Psychometric Differences between LLMs and Humans

📅 2025-09-20
🤖 AI Summary
Existing evaluations of large language models' (LLMs) psychometric properties suffer from a lack of standardization, cross-lingual comparability, and confounding effects from alignment mechanisms. Method: We introduce AIPsychoBench, the first dedicated benchmark for LLM psychometric assessment, featuring lightweight role-playing prompts to circumvent alignment-induced response constraints, a human psychology–grounded multilingual evaluation framework, and cross-lingual comparative analysis across 112 psychometric subcategories. Results: AIPsychoBench achieves a substantial increase in effective response rate (70.12% → 90.40%) and reduces positive and negative response biases to 3.3% and 2.1%, respectively. Crucially, it reveals for the first time systematic cross-lingual deviations of 5%–20.2% from English scores in 43 of the 112 subcategories across seven non-English languages, empirically validating language-mediated effects on LLM psychometrics. This work establishes a methodological foundation and empirical evidence for modeling LLMs' psychological attributes, ensuring fairness in psychometric evaluation, and advancing cross-cultural AI governance.

📝 Abstract
Large Language Models (LLMs) with hundreds of billions of parameters have exhibited human-like intelligence by learning from vast amounts of internet-scale data. However, the uninterpretability of large-scale neural networks raises concerns about the reliability of LLMs. Studies have attempted to assess the psychometric properties of LLMs by borrowing concepts from human psychology to enhance their interpretability, but they fail to account for the fundamental differences between LLMs and humans. This results in high rejection rates when human scales are reused directly. Furthermore, these scales do not support measuring how the psychological properties of LLMs vary across languages. This paper introduces AIPsychoBench, a specialized benchmark tailored to assess the psychological properties of LLMs. It uses a lightweight role-playing prompt to bypass LLM alignment, improving the average effective response rate from 70.12% to 90.40%. Meanwhile, the average biases are only 3.3% (positive) and 2.1% (negative), significantly lower than the respective biases of 9.8% and 6.9% caused by traditional jailbreak prompts. Furthermore, among the 112 psychometric subcategories in total, the score deviations for seven languages relative to English ranged from 5% to 20.2% in 43 subcategories, providing the first comprehensive evidence of the linguistic impact on the psychometrics of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Human psychometric scales fail to account for the fundamental differences between LLMs and humans
High rejection rates when human scales are applied to LLMs directly, undermining evaluation
Existing scales cannot measure variations in LLM psychological properties across languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight role-playing prompts bypass LLM alignment, raising the effective response rate from 70.12% to 90.40%
Substantially lower response biases (3.3% positive, 2.1% negative) than traditional jailbreak prompts (9.8% and 6.9%)
Cross-lingual psychometric comparison across seven languages relative to English
Authors

Wei Xie
College of Computer Science and Technology, National University of Defense Technology

Shuoyoucheng Ma
College of Computer Science and Technology, National University of Defense Technology

Zhenhua Wang
College of Computer Science and Technology, National University of Defense Technology

Enze Wang
College of Computer Science and Technology, National University of Defense Technology

Kai Chen
Institute of Information Engineering, Chinese Academy of Sciences

Xiaobing Sun
Yangzhou University (Software Engineering, Software Data Analytics)

Baosheng Wang