🤖 AI Summary
Item-writing flaws (IWFs) lack empirical grounding in Item Response Theory (IRT) validation, which hinders their integration into psychometric quality control. Method: We analyzed over 7,000 STEM multiple-choice items, automatically annotated each for 19 IWF types, and fitted the two-parameter logistic (2PL) IRT model to estimate item difficulty and discrimination parameters. Cross-disciplinary regression analyses quantified the associations. Contribution/Results: We report the first systematic evidence that total IWF count is significantly and negatively associated with both difficulty and discrimination (p < 0.001), with the strongest effects in the life and physical sciences. Effect magnitudes vary substantially across IWF types, by up to 3.2-fold (e.g., "negatively worded stems" vs. "implausible distractors"). These findings establish IWFs as a pragmatic, theory-informed heuristic for preliminary screening of low-difficulty items, but they confirm that IWFs cannot substitute for data-driven IRT calibration and validation.
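For context, the 2PL model used for calibration above models the probability that an examinee with ability θ answers item i correctly, as a function of the item's discrimination and difficulty. In its standard textbook form (this is the conventional parameterization, not notation taken from the paper):

```latex
P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

where \(a_i\) is the discrimination parameter (slope of the item characteristic curve) and \(b_i\) is the difficulty parameter (the ability level at which the probability of a correct response is 0.5).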
📝 Abstract
High-quality test items are essential for educational assessments, particularly within Item Response Theory (IRT). Traditional validation methods rely on resource-intensive pilot testing to estimate item difficulty and discrimination. More recently, Item-Writing Flaw (IWF) rubrics have emerged as a domain-general approach to evaluating test items based on their textual features. However, their relationship to IRT parameters remains underexplored. To address this gap, we conducted a study of over 7,000 multiple-choice questions across various STEM subjects (e.g., math and biology). Using an automated approach, we annotated each question with a 19-criteria IWF rubric and examined the relationships to data-driven IRT parameters. Our analysis revealed statistically significant links between the number of IWFs and the IRT difficulty and discrimination parameters, particularly in the life and physical science domains. We further observed that specific IWF criteria affect item quality with varying severity (e.g., negative wording vs. implausible distractors). Overall, while IWFs are useful for predicting IRT parameters, particularly for screening low-difficulty MCQs, they cannot replace traditional data-driven validation methods. Our findings highlight the need for further research on domain-general evaluation rubrics and on algorithms that understand domain-specific content for robust item validation.
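The kind of regression analysis described above (IRT parameters regressed on per-item IWF counts) can be sketched with a minimal, self-contained example on simulated data. Everything below is illustrative: the item counts, noise levels, and slopes are made-up placeholders, not the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: number of flagged IWFs per item, plus 2PL
# parameters. In the actual study these would come from the automated
# 19-criteria annotation and from fitting the 2PL model to response data.
n_items = 200
iwf_count = rng.integers(0, 6, size=n_items).astype(float)
difficulty = 0.5 - 0.2 * iwf_count + rng.normal(0, 0.5, n_items)      # b parameter
discrimination = 1.2 - 0.1 * iwf_count + rng.normal(0, 0.3, n_items)  # a parameter

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x, with an intercept term."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# A negative slope indicates that more flaws are associated with lower
# estimated difficulty / discrimination, mirroring the reported direction.
print(f"difficulty    ~ IWF count slope: {ols_slope(iwf_count, difficulty):+.3f}")
print(f"discrimination ~ IWF count slope: {ols_slope(iwf_count, discrimination):+.3f}")
```

A real analysis would of course use the calibrated parameters from the item bank and report significance tests per discipline; this sketch only shows the shape of the computation.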