🤖 AI Summary
Current evaluations of large language models (LLMs) lack the theoretical rigor demanded by psychology and cognitive science. To address this gap, this work proposes and implements PsyCogMetrics AI Lab, the first LLM evaluation platform grounded in psychometric and cognitive-science theory. Built on a three-cycle Action Design Research (ADR) paradigm comprising relevance, rigor, and design cycles, the platform integrates Popperian falsifiability, Classical Test Theory, and Cognitive Load Theory through nested Build-Intervene-Evaluate (BIE) loops. Deployed on a cloud-based architecture, PsyCogMetrics AI Lab provides a theory-driven, reliable, and scalable evaluation infrastructure for interdisciplinary research at the intersection of artificial intelligence and behavioral science.
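For context, the Classical Test Theory foundation behind these reliability claims is the standard decomposition of an observed score into a true score and measurement error, with reliability defined as the true-score share of observed variance:

```latex
X = T + E, \qquad
\rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2} = 1 - \frac{\sigma_E^2}{\sigma_X^2}
```

In the LLM setting, repeated runs of the same model on the same test items would presumably play the role that repeated measurements of a single respondent play in classical psychometrics.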
📝 Abstract
This study presents the development of PsyCogMetrics AI Lab (psycogmetrics.ai), an integrated, cloud-based platform that operationalizes psychometric and cognitive-science methodologies for Large Language Model (LLM) evaluation. The work is framed as a three-cycle Action Design Research (ADR) study: the Relevance Cycle identifies key limitations of current evaluation methods and unmet stakeholder needs; the Rigor Cycle draws on kernel theories, including Popperian falsifiability, Classical Test Theory, and Cognitive Load Theory, to derive deductive design objectives; and the Design Cycle operationalizes these objectives through nested Build-Intervene-Evaluate (BIE) loops. The study contributes a novel IT artifact and a validated design for LLM evaluation, benefiting research at the intersection of AI, psychology, cognitive science, and the wider social and behavioral sciences.
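The abstract does not show how the platform operationalizes its kernel theories in code; as a minimal illustrative sketch only, the snippet below computes Cronbach's alpha, a Classical Test Theory reliability coefficient, over repeated LLM runs treated as respondents. The function name, data, and 0-1 scoring scale are hypothetical and are not taken from PsyCogMetrics AI Lab.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_runs, k_items) matrix of item scores.

    Estimates the fraction of observed-score variance attributable to
    true-score variance under Classical Test Theory assumptions.
    """
    n_runs, k_items = scores.shape
    item_vars = scores.var(axis=0, ddof=1)      # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of run totals
    return (k_items / (k_items - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: 5 independent runs of one LLM, each scored on 4
# evaluation items (0-1 scale); rows play the role of "respondents".
scores = np.array([
    [0.9, 0.8, 0.7, 0.9],
    [0.8, 0.9, 0.6, 0.8],
    [0.9, 0.7, 0.7, 0.9],
    [0.7, 0.8, 0.8, 0.8],
    [0.8, 0.8, 0.6, 0.9],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")
```

A higher alpha would indicate that the items score the model consistently across runs; values are conventionally read against a heuristic threshold such as 0.7, though that cutoff is a convention rather than a theorem.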