🤖 AI Summary
In large-scale international educational assessments such as PISA, linguistic, cultural, and curricular differences frequently induce differential item functioning (DIF), severely biasing the group ability distributions and rankings estimated via conventional item response theory (IRT). Existing DIF detection and calibration methods rely on assumptions that are unrealistic in this setting, such as the existence of a well-defined reference group or of invariant anchor items, and their statistical consistency fails when these assumptions are violated.
Method: We propose a multi-group DIF-robust calibration framework that requires neither a reference group nor anchor items, integrating high-dimensional statistical inference with nonconvex optimization to jointly estimate item and group parameters across populations.
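To make the calibration problem concrete, one standard way to formalize DIF in a two-parameter logistic (2PL) IRT model is to give each item a group-specific intercept shift; the notation below is our own illustration, not necessarily the paper's:

```latex
% Illustrative DIF-augmented 2PL model (notation assumed, not taken from the paper).
% Person i in group k answers item j correctly with probability
\[
  \Pr(Y_{ij} = 1 \mid \theta_i) =
  \frac{\exp\left( a_j \theta_i + d_j + \gamma_{jk} \right)}
       {1 + \exp\left( a_j \theta_i + d_j + \gamma_{jk} \right)},
  \qquad \theta_i \sim \mathcal{N}(\mu_k, \sigma_k^2),
\]
% where $a_j$ and $d_j$ are the discrimination and easiness of item $j$,
% $(\mu_k, \sigma_k^2)$ are the ability mean and variance of group $k$, and
% $\gamma_{jk}$ is the DIF effect of item $j$ in group $k$
% ($\gamma_{jk} = 0$ for all $k$ means item $j$ is DIF-free).
% A constant can be shifted between $\mu_k$ and the $\gamma_{jk}$'s without
% changing the likelihood, so the model is unidentifiable without further
% structure; reference-group and anchor-item methods restore identifiability
% by fixing some $\gamma_{jk} = 0$ a priori, which is precisely the assumption
% the proposed framework avoids.
```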
Contribution/Results: Our approach is the first to theoretically guarantee consistent recovery of the cross-group ability ordering under arbitrary DIF, and rigorous asymptotic analysis establishes its statistical validity. Empirical validation on PISA 2022 mathematics, science, and reading data shows that it substantially corrects bias in the estimated ability distributions and yields more reliable national rankings.
📝 Abstract
With the advance of globalization, International Large-scale Assessments in education (ILSAs), such as the Programme for International Student Assessment (PISA), have become increasingly important in educational research and policy-making. They collect valuable data on education quality and performance development across many education systems worldwide, allowing countries to share techniques and policies that have proven effective. A key tool for analyzing ILSA data is an Item Response Theory (IRT) model, which is used to estimate the performance distributions of different groups (e.g., countries) and then produce a ranking. A major challenge in calibrating the IRT model is that some items suffer from Differential Item Functioning (DIF), i.e., different groups have different probabilities of correctly answering these items even after controlling for individual proficiency levels. DIF is particularly common in ILSAs due to differences in test languages, cultural contexts, and curriculum designs across groups. Ignoring or improperly accounting for DIF when calibrating the IRT model can lead to severe biases in the estimated performance distributions, which may in turn distort the ranking of the groups. Unfortunately, existing methods cannot guarantee statistically consistent recovery of the group ranking without assumptions that are unrealistic for ILSAs, such as the existence and knowledge of reference groups and anchor items. To fill this gap, this paper proposes a new approach to DIF analysis across multiple groups that is computationally efficient and statistically consistent without strong assumptions about reference groups and anchor items. The proposed method is applied to PISA 2022 data from the mathematics, science, and reading domains, providing insights into their DIF structures and the resulting performance rankings of countries.
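As a quick numerical illustration of the bias described above, the following minimal simulation (a hypothetical Rasch-type setup; the item counts, DIF sizes, and scoring rule are all assumptions, not the paper's design) generates two groups with identical ability distributions and shows that a few DIF items alone pull the naive score gap away from zero:

```python
# Minimal DIF-bias simulation (illustrative only, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_items = 2000, 30

# True abilities: both groups drawn from the same N(0, 1), so the true gap is 0.
theta = rng.normal(0.0, 1.0, size=2 * n_per_group)
group = np.repeat([0, 1], n_per_group)

easiness = rng.normal(0.0, 1.0, size=n_items)  # item easiness d_j
dif = np.zeros(n_items)
dif[:6] = -0.8  # hypothetical DIF: 6 of 30 items are harder for group 1 only

# Rasch-type response probabilities: logit = theta_i + d_j (+ DIF shift for group 1).
logits = theta[:, None] + easiness[None, :] \
    + np.where(group[:, None] == 1, dif[None, :], 0.0)
responses = rng.random((2 * n_per_group, n_items)) < 1.0 / (1.0 + np.exp(-logits))

# Naive "calibration": compare mean proportion-correct between groups,
# ignoring DIF. The true ability gap is 0, yet the DIF items shift the estimate.
gap = responses[group == 1].mean() - responses[group == 0].mean()
print(f"naive score gap (true ability gap is 0): {gap:.3f}")
```

In this toy setup the naive gap comes out noticeably negative even though both groups share the same ability distribution, which is exactly the distortion the proposed calibration framework is designed to remove at scale.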