🤖 AI Summary
This study investigates whether large language models (LLMs) replicate human demographic biases—particularly in subjective judgments of politeness and offensiveness. Method: Leveraging the multi-demographic annotation dataset POPQUORN, we systematically evaluate prediction biases across four state-of-the-art LLMs—GPT-4, Claude, Llama, and Gemini—and introduce demographic prompting to inject Black or Asian identity cues. Contribution/Results: We present the first empirical evidence that these models exhibit significant bias favoring White and female labels. Crucially, demographic prompting not only fails to mitigate bias but *reduces* accuracy both overall and specifically for the targeted demographic groups. This reveals a novel phenomenon—“identity prompt failure”—where identity-aware prompting exacerbates, rather than alleviates, structural bias in subjective NLP tasks. Our findings provide critical empirical grounding for modeling and intervening in social biases within LLMs, challenging prevailing assumptions about controllable bias mitigation strategies.
📝 Abstract
Human perception of language depends on personal backgrounds like gender and ethnicity. While existing studies have shown that large language models (LLMs) hold values that are closer to certain societal groups, it is unclear whether their prediction behaviors on subjective NLP tasks also exhibit a similar bias. In this study, leveraging the POPQUORN dataset which contains annotations of diverse demographic backgrounds, we conduct a series of experiments on four popular LLMs to investigate their capability to understand group differences and potential biases in their predictions for politeness and offensiveness. We find that for both tasks, model predictions are closer to the labels from White and female participants. We further explore prompting with the target demographic labels and show that including the target demographic in the prompt actually worsens the model's performance. More specifically, when being prompted to respond from the perspective of"Black"and"Asian"individuals, models show lower performance in predicting both overall scores as well as the scores from corresponding groups. Our results suggest that LLMs hold gender and racial biases for subjective NLP tasks and that demographic-infused prompts alone may be insufficient to mitigate such effects. Code and data are available at https://github.com/Jiaxin-Pei/LLM-Group-Bias.