Does the Prompt-based Large Language Model Recognize Students' Demographics and Introduce Bias in Essay Scoring?

📅 2025-04-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether prompt-based large language models (LLMs), such as GPT-4o, implicitly infer student demographic attributes—specifically first-language background and gender—during automated essay scoring (AES), thereby introducing systematic scoring bias. Employing prompt engineering, interpretability analysis of text embeddings, multivariate regression modeling, and group fairness evaluation, the work provides empirical evidence linking demographic inferability in LLM embedding spaces to scoring error. Results show that GPT-4o reliably identifies non-native speakers (AUC > 0.85), and for every 10% increase in identification accuracy, mean absolute scoring error rises by 23.6%; no significant bias is observed for gender inference. The study uncovers a novel "implicit identification → error amplification" bias mechanism, establishing both theoretical grounding and empirical evidence to guide fairness assessment and mitigation in prompt-based AES systems.
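The two headline quantities above (AUC for non-native identification, and the slope of scoring error against identification accuracy) can be sketched with a minimal, self-contained evaluation snippet. All data below are toy values for illustration; the function names and numbers are assumptions, not taken from the paper.

```python
# Sketch of the two evaluation quantities, on hypothetical toy data.

def auc(labels, scores):
    """Rank-based AUC: probability that a positive example (non-native, 1)
    receives a higher model score than a negative example (native, 0)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ols_slope(x, y):
    """Least-squares slope of y on x (single predictor), i.e. how much
    mean absolute scoring error changes per unit of identification accuracy."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Toy demographic-inference scores: P(non-native) per essay.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.7]
print("identification AUC:", auc(labels, scores))

# Toy per-group identification accuracy vs. mean absolute scoring error.
acc = [0.5, 0.6, 0.7, 0.8]
mae = [0.40, 0.45, 0.52, 0.58]
print("MAE change per unit accuracy:", ols_slope(acc, mae))
```

The paper's actual analysis is a multivariate regression with controls; the single-predictor slope here only illustrates the direction of the "implicit identification → error amplification" relationship.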

📝 Abstract
Large Language Models (LLMs) are widely used in Automated Essay Scoring (AES) due to their ability to capture semantic meaning. Traditional fine-tuning approaches required technical expertise, limiting accessibility for educators with limited technical backgrounds. However, prompt-based tools like ChatGPT have made AES more accessible, enabling educators to obtain machine-generated scores using natural-language prompts (i.e., the prompt-based paradigm). Despite advancements, prior studies have shown bias in fine-tuned LLMs, particularly against disadvantaged groups. It remains unclear whether such biases persist or are amplified in the prompt-based paradigm with cutting-edge tools. Since such biases are believed to stem from the demographic information embedded in pre-trained models (i.e., the ability of LLMs' text embeddings to predict demographic attributes), this study explores the relationship between the model's predictive power of students' demographic attributes based on their written works and its predictive bias in the scoring task in the prompt-based paradigm. Using a publicly available dataset of over 25,000 students' argumentative essays, we designed prompts to elicit demographic inferences (i.e., gender, first-language background) from GPT-4o and assessed fairness in automated scoring. Then we conducted multivariate regression analysis to explore the impact of the model's ability to predict demographics on its scoring outcomes. Our findings revealed that (i) prompt-based LLMs can somewhat infer students' demographics, particularly their first-language backgrounds, from their essays; (ii) scoring biases are more pronounced when the LLM correctly predicts students' first-language background than when it does not; and (iii) scoring error for non-native English speakers increases when the LLM correctly identifies them as non-native.
Problem

Research questions and friction points this paper is trying to address.

Investigates bias in prompt-based LLMs for essay scoring
Examines LLMs' ability to infer student demographics from essays
Assesses impact of demographic prediction on scoring fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses prompt-based LLMs for automated essay scoring
Analyzes demographic bias in LLM scoring outcomes
Leverages GPT-4o for demographic inference and fairness assessment
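The prompt-based workflow described above can be sketched as a pair of templates, one eliciting a demographic inference and one eliciting a score, with a defensive parser for the model's reply. The templates and function names are hypothetical; the paper's exact prompt wording is not reproduced here, and the actual call to GPT-4o is indicated only by a comment.

```python
# Hypothetical prompt templates for the two tasks (illustrative wording).

DEMOGRAPHIC_PROMPT = (
    "Read the following student essay and answer with one word.\n"
    "Is the writer's first language English? Answer 'native' or 'non-native'.\n\n"
    "Essay:\n{essay}"
)

SCORING_PROMPT = (
    "Score the following argumentative essay on a 1-6 scale.\n"
    "Respond with only the integer score.\n\n"
    "Essay:\n{essay}"
)

def build_prompts(essay: str) -> dict:
    """Fill both templates for one essay; in practice each prompt would be
    sent to a chat model (e.g. GPT-4o) in a separate request."""
    return {
        "demographic": DEMOGRAPHIC_PROMPT.format(essay=essay),
        "scoring": SCORING_PROMPT.format(essay=essay),
    }

def parse_score(reply: str, lo: int = 1, hi: int = 6) -> int:
    """Defensively extract an in-range integer score from a model reply,
    tolerating forms like 'Score: 4' or '4/6'."""
    for token in reply.replace("/", " ").split():
        if token.isdigit() and lo <= int(token) <= hi:
            return int(token)
    raise ValueError(f"no score in range {lo}-{hi}: {reply!r}")

prompts = build_prompts("Technology has changed how students learn ...")
print(parse_score("Score: 4"))
```

Keeping the demographic probe and the scoring request in separate prompts, as sketched here, is what lets the fairness analysis compare scoring error between essays whose writers the model does and does not correctly identify.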