🤖 AI Summary
Current large language models (LLMs) lack a scientific, interpretable framework for evaluating their psychological reasoning. This study systematically integrates classical psychometric theory into LLM assessment by combining the Technology Acceptance Model (TAM) with multidimensional validity analysis (convergent, discriminant, predictive, and external validity) to comprehensively evaluate GPT-3.5, GPT-4, LLaMA-2, and LLaMA-3. Results show that all four models meet the fundamental validity criteria, with GPT-4 and LLaMA-3 performing significantly better than their predecessors. This work establishes the feasibility of AI Psychometrics and introduces a novel, interpretable paradigm for assessing the psychological traits of LLMs.
📝 Abstract
With their immense parameter counts and deep neural architectures, large language models (LLMs) rival the complexity of human brains, which also makes them opaque "black box" systems that are challenging to evaluate and interpret. AI Psychometrics is an emerging field that tackles these challenges by applying psychometric methodologies to evaluate and interpret the psychological traits and processes of artificial intelligence (AI) systems. This paper applies AI Psychometrics to evaluate the psychological reasoning and overall psychometric validity of four prominent LLMs: GPT-3.5, GPT-4, LLaMA-2, and LLaMA-3. Using the Technology Acceptance Model (TAM), we examined convergent, discriminant, predictive, and external validity across these models. Our findings reveal that responses from all four models generally met the validity criteria. Moreover, higher-performing models such as GPT-4 and LLaMA-3 consistently demonstrated superior psychometric validity compared to their predecessors, GPT-3.5 and LLaMA-2. These results help establish the validity of applying AI Psychometrics to evaluate and interpret large language models.
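To make the validity terminology concrete, here is a minimal Python sketch of the kind of checks the abstract names: Cronbach's alpha for reliability, Average Variance Extracted (AVE) for convergent validity, and a Fornell-Larcker comparison for discriminant validity, computed over hypothetical Likert-scale TAM responses elicited from an LLM. The item counts, synthetic data, and the item-total-correlation approximation of factor loadings are all illustrative assumptions; the paper's actual analysis pipeline is not specified in this excerpt.

```python
import numpy as np

# Hypothetical 1-7 Likert responses elicited from an LLM for two core TAM
# constructs. Rows = repeated prompts/personas, columns = questionnaire items.
# Data are synthetic placeholders, not results from the paper.
rng = np.random.default_rng(0)
pu = rng.integers(4, 8, size=(50, 4)).astype(float)    # perceived usefulness
peou = rng.integers(3, 8, size=(50, 4)).astype(float)  # perceived ease of use

def cronbach_alpha(items):
    """Internal-consistency reliability for one construct's items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def ave(items):
    """Average Variance Extracted, using item-total correlations as
    stand-in loadings (a single-factor approximation, assumed here)."""
    total = items.sum(axis=1)
    loadings = np.array([np.corrcoef(items[:, j], total)[0, 1]
                         for j in range(items.shape[1])])
    return np.mean(loadings ** 2)

# Convergent validity is commonly read as AVE >= 0.5 for each construct;
# discriminant validity (Fornell-Larcker) asks that sqrt(AVE) exceed the
# correlation between the two construct scores.
r = np.corrcoef(pu.mean(axis=1), peou.mean(axis=1))[0, 1]
print(f"alpha: PU={cronbach_alpha(pu):.2f}, PEOU={cronbach_alpha(peou):.2f}")
print(f"AVE:   PU={ave(pu):.2f}, PEOU={ave(peou):.2f}, inter-construct r={r:.2f}")
print("discriminant OK:", np.sqrt(min(ave(pu), ave(peou))) > abs(r))
```

In a full study, predictive validity would additionally be checked by regressing a behavioral-intention construct on these scores; the sketch above covers only the convergent and discriminant criteria.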