🤖 AI Summary
This study addresses the overreliance on subjective self-reports in measuring cognitive load during educational tasks. We propose item difficulty parameters, derived from Item Response Theory (IRT) modeling of online learning platform data, as an objective proxy for intrinsic cognitive load. By systematically aligning IRT-estimated difficulty with the intrinsic load dimension of cognitive load theory, we empirically validate its theoretical coherence and psychometric utility. Results demonstrate a significant positive correlation between item difficulty and intrinsic cognitive load, consistent with theoretical predictions; moreover, because the metric is derived from behavioral response data rather than self-report, it avoids self-report bias and improves measurement objectivity and generalizability. To our knowledge, this is the first empirical validation of IRT difficulty parameters as indicators of intrinsic cognitive load in interactive learning contexts such as educational games, establishing a practical, reproducible, and theory-grounded approach to objective cognitive load assessment in educational technology research and practice.
📝 Abstract
Cognitive load is key to ensuring an optimal learning experience. However, measuring the cognitive load of educational tasks typically relies on self-report measures, which have been criticized by researchers as subjective. In this study, we investigated the feasibility of using item difficulty parameters as a proxy for measuring cognitive load in an online learning platform. Difficulty values derived using item response theory were consistent with theories of how intrinsic and extraneous load contribute to cognitive load. This finding suggests that item difficulty can represent intrinsic load when modelling cognitive load in learning games.
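For readers unfamiliar with how IRT difficulty parameters are obtained, the sketch below illustrates the general idea; it is not the study's pipeline, and the data, variable names, and model choice (a Rasch/1PL model, the simplest IRT variant) are assumptions for demonstration. It estimates a difficulty value b_j for each item from a binary response matrix via joint maximum likelihood, using only NumPy and SciPy; under the study's hypothesis, items with larger estimated b_j would impose greater intrinsic cognitive load.

```python
# Illustrative sketch (not the authors' code): estimating Rasch (1PL) item
# difficulties b_j from binary correctness data. In the Rasch model,
# P(student i answers item j correctly) = sigmoid(theta_i - b_j),
# where theta_i is the student's ability and b_j is the item's difficulty.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # numerically stable logistic sigmoid

rng = np.random.default_rng(0)
n_students, n_items = 200, 10

# Simulated responses (hypothetical data standing in for platform logs).
true_theta = rng.normal(0.0, 1.0, n_students)
true_b = np.linspace(-2.0, 2.0, n_items)
probs = expit(true_theta[:, None] - true_b[None, :])
responses = rng.binomial(1, probs)  # 1 = correct, 0 = incorrect

def neg_log_likelihood(params):
    theta, b = params[:n_students], params[n_students:]
    theta = theta - theta.mean()  # identification: center abilities at zero
    logits = theta[:, None] - b[None, :]
    # Bernoulli log-likelihood in a numerically stable form:
    # r * logit - log(1 + exp(logit))
    ll = responses * logits - np.logaddexp(0.0, logits)
    return -ll.sum()

init = np.zeros(n_students + n_items)
result = minimize(neg_log_likelihood, init, method="L-BFGS-B")
est_b = result.x[n_students:]

# Recovered difficulties should track the true ones; higher b_j = harder item.
print("estimated difficulties:", np.round(est_b, 2))
print("correlation with truth:", np.corrcoef(est_b, true_b)[0, 1].round(3))
```

In practice, studies like this one would fit such a model to real learner response logs (often with established IRT packages rather than hand-rolled optimization) and then correlate the resulting difficulty estimates with cognitive load measures.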