🤖 AI Summary
Current evaluations of large language models often conflate human cognitive constructs, such as reasoning or theory of mind, with benchmark performance, without a solid grounding in construct validity. This work is the first to bring the psychometric framework of nomological networks to LLM evaluation, systematically comparing Cronbach and Meehl's nomological network approach, the interpretive-argument framework of Messick and Kane, and Borsboom's causal framework. It advocates adopting the nomological network perspective to link theoretical model capabilities to empirical measurements. Through construct validity analysis, network modeling, and a case study on reasoning ability, the paper argues that this approach enables a more substantive definition and validation of LLM capability constructs without committing to strong ontological assumptions, thereby offering a foundation for interpretable and verifiable evaluation of LLM capabilities.
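To make the network-modeling idea concrete, here is a minimal, hypothetical sketch (not from the paper) of a nomological network as a small typed graph: theoretical constructs and observable measures are nodes, and posited lawful relations are edges. All names here (`reasoning`, `gsm8k_accuracy`, the relation labels) are illustrative assumptions, not the paper's formalism.

```python
# Hypothetical sketch (not from the paper): a nomological network as a graph
# whose nodes are theoretical constructs or observable measures, and whose
# edges record posited lawful relations between them.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Node:
    name: str
    kind: str  # "construct" (theoretical) or "observable" (measured)


@dataclass
class NomologicalNetwork:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)  # (source, target, relation)

    def add_relation(self, source: Node, target: Node, relation: str) -> None:
        self.nodes.update({source, target})
        self.edges.append((source, target, relation))

    def grounding(self, construct: Node) -> list:
        """Observables that give the construct its empirical meaning."""
        return [t for s, t, _ in self.edges
                if s == construct and t.kind == "observable"]


# Illustrative use: "reasoning" is partly defined by its posited links to
# other constructs and to benchmark-level observables (names are made up).
reasoning = Node("reasoning", "construct")
working_memory = Node("working_memory", "construct")
gsm8k = Node("gsm8k_accuracy", "observable")
syllogisms = Node("syllogism_accuracy", "observable")

net = NomologicalNetwork()
net.add_relation(reasoning, working_memory, "depends_on")
net.add_relation(reasoning, gsm8k, "predicts")
net.add_relation(reasoning, syllogisms, "predicts")

print([o.name for o in net.grounding(reasoning)])
# ['gsm8k_accuracy', 'syllogism_accuracy']
```

On this picture, the construct gains meaning through its position in the whole network: changing which observables and constructs `reasoning` is linked to changes what the construct means, which is the substantive point the nomological account contributes.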
📝 Abstract
Recent work in machine learning increasingly attributes human-like capabilities such as reasoning or theory of mind to large language models (LLMs) on the basis of benchmark performance. This paper examines that practice through the lens of construct validity, understood as the problem of linking theoretical capabilities to their empirical measurements. It contrasts three influential frameworks: the nomological account developed by Cronbach and Meehl, the inferential account proposed by Messick and refined by Kane, and Borsboom's causal account. I argue that the nomological account provides the most suitable foundation for current LLM capability research. It avoids the strong ontological commitments of the causal account while offering a more substantive framework for articulating construct meaning than the inferential account. I explore the conceptual implications of adopting the nomological account for LLM research through a concrete case: the assessment of reasoning capabilities.