🤖 AI Summary
This study systematically evaluates fairness risks in large language models (LLMs) acting as "teachers" in personalized education, focusing on bias in educational content generation and selection across multiple demographic attributes: race, ethnicity, sex, gender, disability status, income level, and national origin.
Method: We introduce two novel quantitative metrics, Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB), enabling rigorous, scenario-specific measurement of LLM bias in education. A large-scale, controlled evaluation is conducted across nine state-of-the-art open- and closed-source LLMs using over 17,000 educational explanations spanning multiple topics and difficulty levels.
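The summary does not reproduce the metric definitions, so the sketch below shows one plausible reading of the two scores: it assumes MAB is the mean absolute deviation of per-group scores from the cross-group mean, and MDB is the largest gap between any two groups on one attribute. All names and values here are illustrative, not taken from the paper.

```python
from __future__ import annotations
import numpy as np

def mean_absolute_bias(group_scores: dict[str, float],
                       baseline: float | None = None) -> float:
    """Assumed form of MAB: mean absolute deviation of per-group scores
    from a baseline (the cross-group mean if none is given)."""
    scores = np.array(list(group_scores.values()), dtype=float)
    if baseline is None:
        baseline = scores.mean()
    return float(np.abs(scores - baseline).mean())

def max_difference_bias(group_scores: dict[str, float]) -> float:
    """Assumed form of MDB: the widest gap between any two groups'
    scores along a single demographic attribute."""
    scores = np.array(list(group_scores.values()), dtype=float)
    return float(scores.max() - scores.min())

# Hypothetical per-group content-quality scores for one attribute (income level).
income_scores = {"low": 0.62, "middle": 0.71, "high": 0.78}
print(mean_absolute_bias(income_scores))  # average spread around the group mean
print(max_difference_bias(income_scores)) # worst-case pairwise disparity
```

Under this reading, MAB captures how unevenly a model treats groups on average, while MDB flags the single worst-affected pairing, which is why the two can peak on different attributes.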
Contribution/Results: All evaluated frontier models exhibit significant fairness risks, with similar bias levels across models. MAB is highest along income levels, MDB is highest along both income and disability status, and both metrics are lowest, though still non-negligible, for sex/gender and race/ethnicity. Critically, models both reinforce and invert harmful stereotypes, revealing a dual-harm mechanism that undermines equitable pedagogy.
📝 Abstract
With the increasing adoption of large language models (LLMs) in education, concerns about inherent biases in these models have gained prominence. We evaluate LLMs for bias in the personalized educational setting, specifically focusing on the models' roles as "teachers." We reveal significant biases in how models generate and select educational content tailored to different demographic groups, including race, ethnicity, sex, gender, disability status, income, and national origin. We introduce and apply two bias score metrics, Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB), to analyze nine open- and closed-source state-of-the-art LLMs. Our experiments, which utilize over 17,000 educational explanations across multiple difficulty levels and topics, uncover that models potentially harm student learning both by perpetuating harmful stereotypes and by reversing them. We find that bias is similar for all frontier models, with MAB highest along income levels while MDB is highest relative to both income and disability status. For both metrics, we find that bias is lowest for sex/gender and race/ethnicity.