Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback

📅 2026-03-12
🤖 AI Summary
This study investigates how large language models (LLMs) exhibit systematic biases when providing personalized writing feedback conditioned on student identity attributes such as race, gender, and learning needs. By embedding student demographic labels into prompts, the authors elicited feedback from GPT-4o, GPT-3.5-turbo, Llama-3.3 70B, and Llama-3.1 8B on the same 600 eighth-grade argumentative essays and analyzed lexical shifts using an enhanced Marked Words framework. Findings reveal that, despite identical essay content across prompt conditions, the models consistently gave excessive praise and withheld substantive criticism from students marked as racial minorities, as having disabilities, or as coming from non-dominant linguistic backgrounds, reflecting implicit assumptions about their capabilities. The paper introduces the concept of “Marked Pedagogies,” offering the first systematic account of how educational AI enacts bias through positive overcompensation and softened critique, and underscores the urgent need for transparency and accountability in AI-driven instruction.

📝 Abstract
Effective personalized feedback is critical to students' literacy development. Though LLM-powered tools now promise to automate such feedback at scale, LLMs are not language-neutral: they privilege standard academic English and reproduce social stereotypes, raising concerns about how "personalization" shapes the feedback students receive. We examine how four widely used LLMs (GPT-4o, GPT-3.5-turbo, Llama-3.3 70B, Llama-3.1 8B) adapt written feedback in response to student attributes. Using 600 eighth-grade persuasive essays from the PERSUADE dataset, we generated feedback under prompt conditions embedding gender, race/ethnicity, learning needs, achievement, and motivation. We analyze lexical shifts across model outputs by adapting the Marked Words framework. Our results reveal systematic, stereotype-aligned shifts in feedback conditioned on presumed student attributes--even when essay content was identical. Feedback for students marked by race, language, or disability often exhibited positive feedback bias and feedback withholding bias--overuse of praise, less substantive critique, and assumptions of limited ability. Across attributes, models tailored not only what content was emphasized but also how writing was judged and how students were addressed. We term these instructional orientations Marked Pedagogies and highlight the need for transparency and accountability in automated feedback tools.
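The Marked Words analysis the abstract adapts is built on a weighted log-odds-ratio with an informative Dirichlet prior: for each word, compare its rate in feedback generated under a "marked" identity condition against feedback for the same essays under an unmarked condition, and score the shift as a z-statistic. A minimal sketch of that statistic (the corpora, prior choice, and toy vocabulary below are illustrative assumptions, not the paper's actual data or pipeline):

```python
from collections import Counter
from math import log, sqrt

def log_odds_with_prior(marked, unmarked, prior):
    """Weighted log-odds-ratio with an informative Dirichlet prior,
    the statistic underlying Marked Words-style lexical comparison.
    Returns a z-score per word: positive means the word is
    over-represented in the 'marked' feedback corpus."""
    a0 = sum(prior.values())
    n1, n2 = sum(marked.values()), sum(unmarked.values())
    z = {}
    for w in set(marked) | set(unmarked):
        a = prior.get(w, 0.01)  # small floor for words absent from the prior
        y1, y2 = marked.get(w, 0), unmarked.get(w, 0)
        # difference of smoothed log-odds between the two corpora
        delta = (log((y1 + a) / (n1 + a0 - y1 - a))
                 - log((y2 + a) / (n2 + a0 - y2 - a)))
        var = 1.0 / (y1 + a) + 1.0 / (y2 + a)  # approximate variance
        z[w] = delta / sqrt(var)
    return z

# Toy illustration: praise-heavy vs. critique-heavy feedback tokens
marked = Counter("great wonderful effort great keep trying wonderful".split())
unmarked = Counter("revise thesis evidence weak argument revise unclear".split())
prior = marked + unmarked  # combined counts as the informative prior
scores = log_odds_with_prior(marked, unmarked, prior)
```

On this toy input, praise terms such as "wonderful" score positive and critique terms such as "revise" score negative, which is the kind of stereotype-aligned lexical shift (positive feedback bias, withheld critique) the paper reports at scale across identity conditions.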
Problem

Research questions and friction points this paper is trying to address.

linguistic bias
personalized feedback
LLM
educational equity
stereotypes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Marked Pedagogies
linguistic bias
automated writing feedback
large language models
educational equity