The Impact of Item-Writing Flaws on Difficulty and Discrimination in Item Response Theory

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Item-writing flaw (IWF) rubrics lack empirical grounding in Item Response Theory (IRT), which hinders their integration into psychometric quality control. Method: We analyzed over 7,000 STEM multiple-choice items, automatically annotated each for 19 IWF types, and fitted the two-parameter logistic (2PL) IRT model to estimate item difficulty and discrimination parameters; cross-disciplinary regression analyses quantified the associations. Contribution/Results: We report the first systematic evidence that total IWF count correlates significantly and negatively with both difficulty and discrimination (p < 0.001), with the strongest effects in the life and physical sciences. Effect magnitudes vary substantially across IWF types, by up to 3.2-fold (e.g., "negatively worded stems" vs. "implausible distractors"). These findings establish IWFs as a pragmatic, theory-informed heuristic for preliminary screening of low-difficulty items, but confirm that they cannot substitute for data-driven IRT calibration and validation.
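
For reference, the difficulty and discrimination parameters above come from the standard two-parameter logistic (2PL) model. In conventional psychometric notation (the symbols below are standard, not quoted from the paper), the probability that examinee $i$ answers item $j$ correctly is

$$P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\left(-a_j(\theta_i - b_j)\right)},$$

where $\theta_i$ is the examinee's latent ability, $b_j$ is the item's difficulty (the ability level at which a correct response is 50% likely), and $a_j$ is its discrimination (how sharply the item separates examinees around $b_j$).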

📝 Abstract
High-quality test items are essential for educational assessments, particularly within Item Response Theory (IRT). Traditional validation methods rely on resource-intensive pilot testing to estimate item difficulty and discrimination. More recently, Item-Writing Flaw (IWF) rubrics emerged as a domain-general approach for evaluating test items based on textual features. However, their relationship to IRT parameters remains underexplored. To address this gap, we conducted a study involving over 7,000 multiple-choice questions across various STEM subjects (e.g., math and biology). Using an automated approach, we annotated each question with a 19-criteria IWF rubric and studied relationships to data-driven IRT parameters. Our analysis revealed statistically significant links between the number of IWFs and IRT difficulty and discrimination parameters, particularly in life and physical science domains. We further observed how specific IWF criteria can impact item quality more or less severely (e.g., negative wording vs. implausible distractors). Overall, while IWFs are useful for predicting IRT parameters, particularly for screening low-difficulty MCQs, they cannot replace traditional data-driven validation methods. Our findings highlight the need for further research on domain-general evaluation rubrics and algorithms that understand domain-specific content for robust item validation.
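
To illustrate what automated rubric annotation can look like, the sketch below flags three classic item-writing flaws (a negatively worded stem, an "all/none of the above" option, and a longest-option cue). The criteria come from the standard IWF literature; the function name, data format, and thresholds are illustrative assumptions, not the authors' 19-criteria annotation system.

```python
import re

def annotate_iwfs(stem: str, options: list[str]) -> list[str]:
    """Flag a few classic item-writing flaws (illustrative, not the paper's 19-criteria system)."""
    flaws = []
    # Negatively worded stem, e.g., "Which of the following is NOT ...".
    if re.search(r"\b(not|except|least)\b", stem, flags=re.IGNORECASE):
        flaws.append("negatively worded stem")
    # "All/none of the above" options are a commonly flagged flaw.
    if any(re.search(r"\b(all|none) of the above\b", o, flags=re.IGNORECASE) for o in options):
        flaws.append("all/none-of-the-above option")
    # Longest-option cue: flag when one option is far longer than the average of the rest.
    lengths = [len(o) for o in options]
    mean_rest = (sum(lengths) - max(lengths)) / (len(options) - 1)
    if max(lengths) > 2 * mean_rest:
        flaws.append("one option much longer than the rest")
    return flaws

print(annotate_iwfs(
    "Which of the following is NOT a noble gas?",
    ["Neon", "Argon", "Nitrogen", "All of the above"],
))
```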
Problem

Research questions and friction points this paper is trying to address.

Explores relationship between Item-Writing Flaws and IRT parameters.
Assesses impact of IWFs on difficulty and discrimination in STEM subjects.
Evaluates effectiveness of IWF rubrics versus traditional validation methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated annotation using a 19-criteria IWF rubric.
Analyzed links between IWFs and IRT parameters (a simplified sketch of this analysis appears below).
Identified specific IWF criteria impacting item quality.
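
To make the analysis pattern concrete, here is a minimal sketch on synthetic data. All names and data-generating choices are illustrative assumptions, not the authors' code: the paper fits a full 2PL model, whereas this sketch approximates item parameters with per-item logistic regressions against a standardized total-score ability proxy, then regresses hypothetical IWF counts against the estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_examinees, n_items = 2000, 50

# Synthetic ground truth: abilities, discriminations (a), difficulties (b).
theta = rng.normal(size=n_examinees)
a_true = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)
b_true = rng.normal(size=n_items)

# Simulate 0/1 responses under the 2PL model.
p = 1.0 / (1.0 + np.exp(-a_true * (theta[:, None] - b_true)))
responses = (rng.random((n_examinees, n_items)) < p).astype(int)

# Ability proxy: standardized total score (a stand-in for full 2PL calibration).
total = responses.sum(axis=1)
ability = (total - total.mean()) / total.std()
X = sm.add_constant(ability)

# Per-item logistic regression: slope ~ discrimination, -intercept/slope ~ difficulty.
a_hat = np.empty(n_items)
b_hat = np.empty(n_items)
for j in range(n_items):
    fit = sm.Logit(responses[:, j], X).fit(disp=0)
    intercept, slope = fit.params
    a_hat[j], b_hat[j] = slope, -intercept / slope

# Hypothetical per-item IWF counts, generated so flawed items tend to be
# easier, mirroring the direction of the association reported in the paper.
iwf_counts = rng.poisson(lam=np.clip(2.0 - 0.7 * b_true, 0.1, None))

# Cross-item OLS: does total IWF count predict the estimated parameters?
for name, y in [("difficulty b", b_hat), ("discrimination a", a_hat)]:
    ols = sm.OLS(y, sm.add_constant(iwf_counts)).fit()
    print(f"{name}: slope={ols.params[1]:+.3f}, p={ols.pvalues[1]:.4f}")
```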
🔎 Similar Papers
No similar papers found.
Robin Schmucker
JP Morgan Chase
Machine Learning · Natural Language Processing · Human-AI Interaction
Steven Moore
Human-Computer Interaction, Carnegie Mellon University