AI Summary
This study investigates algorithmic bias in automated scoring systems for science writing by English language learners (ELLs) in middle school, focusing on fairness implications arising from imbalanced ELL representation in training data. We construct a multi-scale ELL response dataset and employ fine-tuned BERT models for scoring. Using Friedman tests and a newly proposed Mean Score Gap (MSG) differential analysis, we systematically quantify the threshold effect of ELL sample size in training data, marking the first such empirical characterization. Results show that AI scoring exhibits no statistically significant bias when ELL samples constitute ≥1,000 instances in training data, whereas a significant scoring disparity emerges with as few as 200 ELL samples. These findings identify minimum data scale as a critical threshold for ensuring equitable scoring outcomes for ELLs. The study further contributes a reproducible evaluation framework and empirically grounded benchmarks to guide fairness-aware design of educational AI systems.
Abstract
This study investigated potential scoring bias and disparities toward English Language Learners (ELLs) when automatic scoring systems are applied to middle school students' written responses to science assessments. We specifically examined how training data with unbalanced ELL representation contributes to scoring bias and disparities. We fine-tuned BERT with four datasets: responses from (1) ELLs, (2) non-ELLs, (3) a mixed dataset reflecting the real-world proportion of ELLs and non-ELLs (unbalanced), and (4) a balanced mixed dataset with equal representation of both groups. The study analyzed 21 assessment items: 10 items with about 30,000 ELL responses, five items with about 1,000 ELL responses, and six items with about 200 ELL responses. Scoring accuracy (Acc) was calculated and compared across the four training conditions using Friedman tests to identify bias. We measured the Mean Score Gaps (MSGs) between ELLs and non-ELLs and then calculated the difference between the human-generated and AI-generated MSGs to identify scoring disparities. We found no AI bias or distorted disparities between ELLs and non-ELLs when the training dataset was sufficiently large (ELL = 30,000 and ELL = 1,000), but concerns may arise when the sample size is limited (ELL = 200).
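The two analyses described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the score arrays, item-level accuracy values, and the `mean_score_gap` helper are hypothetical placeholders standing in for the study's real data, and only the general shape of the MSG-difference and Friedman-test computations follows the abstract.

```python
# Hedged sketch of the fairness checks described in the abstract:
# (a) MSG differential between human and AI scores, and
# (b) a Friedman test over per-item accuracies from the four
#     training conditions. All numbers below are toy data.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n = 200
is_ell = rng.random(n) < 0.5                     # hypothetical ELL indicator
human = rng.integers(0, 4, n).astype(float)      # hypothetical human scores
ai = human + rng.normal(0.0, 0.3, n)             # hypothetical AI scores

def mean_score_gap(scores, is_ell):
    """MSG: mean score of non-ELLs minus mean score of ELLs."""
    return scores[~is_ell].mean() - scores[is_ell].mean()

# Scoring disparity: how much the AI-generated gap deviates from the
# human-generated gap. A value near zero suggests no distorted disparity.
msg_diff = mean_score_gap(ai, is_ell) - mean_score_gap(human, is_ell)

# Friedman test across the four training conditions, one accuracy per
# assessment item (toy accuracies; the study used 21 items).
acc_ell       = [0.81, 0.79, 0.84, 0.80, 0.82]
acc_non_ell   = [0.83, 0.80, 0.85, 0.81, 0.83]
acc_unbal_mix = [0.82, 0.80, 0.84, 0.80, 0.82]
acc_bal_mix   = [0.82, 0.81, 0.85, 0.81, 0.83]
stat, p = friedmanchisquare(acc_ell, acc_non_ell, acc_unbal_mix, acc_bal_mix)
```

A non-significant Friedman `p` would indicate no detectable accuracy bias across conditions, while a large `msg_diff` would flag a scoring disparity of the kind the study reports for the small-sample (ELL = 200) setting.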