Efficient Detection of Bad Benchmark Items with Novel Scalability Coefficients

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the widespread presence of psychometrically unvalidated poor-quality items—such as those with incorrect answers, ambiguous wording, or misaligned objectives—in large-scale AI evaluations and human assessments, which undermines measurement validity. To tackle this issue, the authors propose a novel nonparametric scalability coefficient method based on inter-item isotonic regression, introducing the signed isotonic R² metric. This approach efficiently identifies globally problematic items without relying on linear assumptions or parametric item response models. By integrating Kendall’s τ for directional consistency, pairwise isotonic fitting, and scalability aggregation, the method achieves both directional coherence and optimal signal extraction. It is lightweight, model-agnostic, and well-suited for small-sample, high-dimensional, and mixed-item-type scenarios. Empirical evaluations on HS Math, GSM8K, MMLU, and two human assessment datasets demonstrate that its AUC performance in ranking poor-quality items matches or exceeds that of classical test theory, item response theory, and dimensional diagnostic approaches.

📝 Abstract
The validity of assessments, from large-scale AI benchmarks to human classrooms, depends on the quality of individual items, yet modern evaluation instruments often contain thousands of items with minimal psychometric vetting. We introduce a new family of nonparametric scalability coefficients based on interitem isotonic regression for efficiently detecting globally bad items (e.g., miskeyed, ambiguously worded, or construct-misaligned). The central contribution is the signed isotonic $R^2$, which measures the maximal proportion of variance in one item explainable by a monotone function of another while preserving the direction of association via Kendall's $τ$. Aggregating these pairwise coefficients yields item-level scores that sharply separate problematic items from acceptable ones without assuming linearity or committing to a parametric item response model. We show that the signed isotonic $R^2$ is extremal among monotone predictors (it extracts the strongest possible monotone signal between any two items) and that this optimality property translates directly into practical screening power. Across three AI benchmark datasets (HS Math, GSM8K, MMLU) and two human assessment datasets, the signed isotonic $R^2$ consistently achieves top-tier AUC for ranking bad items above good ones, outperforming or matching a comprehensive battery of classical test theory, item response theory, and dimensionality-based diagnostics. Crucially, the method remains robust under the small-n/large-p conditions typical of AI evaluation, requires only bivariate monotone fits computable in seconds, and handles mixed item types (binary, ordinal, continuous) without modification. It is a lightweight, model-agnostic filter that can materially reduce the reviewer effort needed to find flawed items in modern large-scale evaluation regimes.
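The pipeline the abstract describes (a directional sign from Kendall's $τ$, a pairwise isotonic fit scored by $R^2$, and aggregation of pairwise coefficients into item-level scores) can be sketched in pure Python. This is a minimal illustration, not the authors' code: the pool-adjacent-violators routine, the tie handling, and the mean-absolute-value aggregation rule are all assumptions made for the sketch.

```python
def kendall_tau_sign(x, y):
    # Sign of Kendall's tau: +1 if concordant pairs dominate, else -1.
    s = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            d = (x[i] - x[j]) * (y[i] - y[j])
            s += (d > 0) - (d < 0)
    return 1 if s >= 0 else -1

def pav(y):
    # Pool-adjacent-violators: least-squares nondecreasing fit to y.
    blocks = []  # each block: [mean, weight, count]
    for yi in y:
        blocks.append([yi, 1.0, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

def signed_isotonic_r2(x, y):
    # R^2 of the best monotone fit of y on x, signed by Kendall's tau.
    sgn = kendall_tau_sign(x, y)
    order = sorted(range(len(x)), key=lambda i: x[i])
    ys = [sgn * y[i] for i in order]   # flip y when the association is decreasing
    fit = pav(ys)
    ybar = sum(ys) / len(ys)
    sst = sum((v - ybar) ** 2 for v in ys)
    sse = sum((v - f) ** 2 for v, f in zip(ys, fit))
    return sgn * (0.0 if sst == 0 else 1.0 - sse / sst)

def item_scores(matrix):
    # matrix: persons x items. Score item j by the mean |signed isotonic R^2|
    # against every other item; low scores flag candidate bad items.
    # (The paper's exact aggregation rule may differ; mean-|.| is an assumption.)
    p = len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(p)]
    return [
        sum(abs(signed_isotonic_r2(cols[k], cols[j])) for k in range(p) if k != j)
        / (p - 1)
        for j in range(p)
    ]
```

Because the fit is monotone rather than linear, two items related by any increasing (or decreasing) transformation score $\pm 1$, while an item unrelated to the rest scores near zero, which is what makes the coefficient usable across binary, ordinal, and continuous items without modification.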
Problem

Research questions and friction points this paper is trying to address.

bad item detection
scalability coefficients
AI benchmarking
psychometric vetting
large-scale assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

signed isotonic R²
nonparametric scalability
interitem isotonic regression
bad item detection
model-agnostic evaluation
🔎 Similar Papers
2024-06-26 · Conference on Empirical Methods in Natural Language Processing · Citations: 0
Michael Hardy
Stanford University, CA, United States
Joshua Gilbert
Harvard University
Education · Quantitative Methods · Statistics · Psychometrics · Music
Benjamin Domingue
Stanford University, CA, United States