LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study systematically evaluates fairness risks in large language models (LLMs) acting as “teachers” in personalized education, focusing on bias in educational content generation and selection across multidimensional demographic attributes—including race, ethnicity, gender, gender identity, disability status, income level, and national origin. Method: We introduce two novel quantitative metrics—Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB)—enabling the first rigorous, scenario-specific measurement of LLM bias in education. A large-scale, controlled evaluation is conducted across nine state-of-the-art open- and closed-source LLMs using over 17,000 educationally diverse, cross-disciplinary explanatory samples spanning multiple difficulty levels. Contribution/Results: All evaluated frontier models exhibit statistically significant fairness risks, with the strongest disparities observed along income and disability dimensions (highest MDB), and comparatively lower—but still non-negligible—bias along gender and racial lines. Critically, models simultaneously reinforce and invert harmful stereotypes, revealing a dual-harm mechanism that undermines equitable pedagogy.

📝 Abstract
With the increasing adoption of large language models (LLMs) in education, concerns about inherent biases in these models have gained prominence. We evaluate LLMs for bias in the personalized educational setting, specifically focusing on the models' roles as "teachers." We reveal significant biases in how models generate and select educational content tailored to different demographic groups, including race, ethnicity, sex, gender, disability status, income, and national origin. We introduce and apply two bias score metrics--Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB)--to analyze 9 open and closed state-of-the-art LLMs. Our experiments, which utilize over 17,000 educational explanations across multiple difficulty levels and topics, uncover that models potentially harm student learning by both perpetuating harmful stereotypes and reversing them. We find that bias is similar across all frontier models, with the highest MAB along income levels, while MDB is highest relative to both income and disability status. For both metrics, we find the lowest bias for sex/gender and race/ethnicity.
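The two metrics can be illustrated with a minimal sketch. The definitions below are assumptions made for illustration (MAB as the mean absolute deviation of per-group scores from the group mean, MDB as the largest pairwise gap between groups); the paper's exact formulas may differ:

```python
# Hypothetical sketch of group-level bias metrics in the spirit of MAB/MDB.
# Assumed definitions (not necessarily the paper's):
#   MAB = mean absolute deviation of each group's score from the overall group mean
#   MDB = maximum absolute difference between any two groups' scores

def mean_absolute_bias(group_scores):
    """Average absolute deviation of per-group scores from their mean."""
    scores = list(group_scores.values())
    mean = sum(scores) / len(scores)
    return sum(abs(s - mean) for s in scores) / len(scores)

def max_difference_bias(group_scores):
    """Largest gap between the highest- and lowest-scoring groups."""
    scores = list(group_scores.values())
    return max(scores) - min(scores)

# Illustrative (fabricated) example: rates at which a model selects the
# "advanced" explanation for students described with each income level.
scores = {"low_income": 0.42, "middle_income": 0.55, "high_income": 0.63}
print(round(mean_absolute_bias(scores), 4))
print(round(max_difference_bias(scores), 4))
```

A score of 0 on either metric would indicate identical treatment across groups; larger values indicate larger disparities along that demographic dimension.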
Problem

Research questions and friction points this paper is trying to address.

Assessing bias in LLM-generated and LLM-selected educational content
Quantifying disparities across demographic attributes (race, gender, disability, income, national origin)
Measuring how such bias could affect student learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB) metrics
Evaluates 9 open- and closed-source state-of-the-art LLMs in teacher roles
Analyzes over 17,000 educational explanations across topics and difficulty levels