🤖 AI Summary
This study systematically evaluates the multidimensional biases (geographic, demographic, and socioeconomic) of large language models (LLMs) in academic recommendation tasks. We propose the first fairness-aware evaluation framework tailored to educational recommendation, moving beyond conventional accuracy metrics to quantify representation bias in university and major recommendations, imbalance in Global North–South institutional coverage, and gender stereotyping. Using LLaMA-3.1-8B, Gemma-7B, and Mistral-7B, we generate over 25,000 recommendations for 360 simulated users spanning diverse gender, nationality, and socioeconomic backgrounds. The results reveal pronounced systemic biases: a strong preference for Global North institutions, reinforcement of gender stereotypes, and high recommendation redundancy. Although LLaMA-3.1 achieves the broadest coverage (481 universities across 58 countries), it still exhibits significant inequities. Our work establishes a reproducible, empirically grounded assessment paradigm to advance the fair governance of LLMs in education.
📝 Abstract
Large Language Models (LLMs) are increasingly used as everyday recommendation systems for tasks such as education planning, yet their recommendations risk perpetuating societal biases. This paper empirically examines geographic, demographic, and economic biases in the university and program suggestions of three open-source LLMs: LLaMA-3.1-8B, Gemma-7B, and Mistral-7B. Using 360 simulated user profiles that vary by gender, nationality, and economic status, we analyze over 25,000 recommendations. The results show strong biases: institutions in the Global North are disproportionately favored, recommendations often reinforce gender stereotypes, and the same institutions are recommended repeatedly. While LLaMA-3.1 achieves the highest diversity, recommending 481 unique universities across 58 countries, systemic disparities persist. To quantify these issues, we propose a novel, multidimensional evaluation framework that goes beyond accuracy by measuring demographic and geographic representation. Our findings highlight the urgent need to address bias in educational LLMs in order to ensure equitable global access to higher education.
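To make the framework's representation metrics concrete, below is a minimal Python sketch of how institutional coverage, Global North share, and recommendation redundancy might be computed from a batch of generated recommendations. The record format, function names, the illustrative Global North grouping, and the Herfindahl-style redundancy measure are all assumptions for exposition, not the paper's actual implementation.

```python
from collections import Counter

# Assumed record format (not the paper's): one (profile_id, university, country)
# tuple per generated recommendation.
Rec = tuple[str, str, str]

# Illustrative Global North grouping; the paper's exact country partition is not given here.
GLOBAL_NORTH = {"United States", "United Kingdom", "Canada", "Germany", "France", "Australia"}

def coverage(recs: list[Rec]) -> tuple[int, int]:
    """Count unique universities and countries recommended
    (e.g., 481 universities across 58 countries for LLaMA-3.1)."""
    universities = {u for _, u, _ in recs}
    countries = {c for _, _, c in recs}
    return len(universities), len(countries)

def global_north_share(recs: list[Rec]) -> float:
    """Fraction of all recommendations pointing to Global North institutions."""
    return sum(c in GLOBAL_NORTH for _, _, c in recs) / len(recs)

def redundancy(recs: list[Rec]) -> float:
    """Herfindahl-style concentration over universities: approaches 1/N under a
    uniform spread across N universities, and equals 1.0 when a single
    university is always recommended."""
    counts = Counter(u for _, u, _ in recs)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())
```

Comparing these quantities across the gender, nationality, and economic-status strata of the 360 simulated profiles is what surfaces the disparities reported above; an accuracy metric alone would not reveal them.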