Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation

📅 2025-07-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how demographic bias and model interpretability affect labeling reliability in large language models (LLMs), addressing three core questions: (1) how much explanatory power annotator demographic attributes have over label decisions; (2) how reliable generative AI (GenAI) is as an annotator, and whether persona-based prompting improves its alignment with human judgments; and (3) how content-driven explanations and annotation protocols moderate fairness. Using generalized linear mixed models, XAI techniques (e.g., feature attribution), and iterative human-AI annotation experiments, the study finds that demographic factors explain only about 8% of label variance; that GenAI achieves higher inter-annotator consistency than humans even without persona prompting; and that XAI reveals the models relying on discriminatory semantic cues, not demographic features, during labeling. The key contribution is demonstrating that a “content-first” explanation paradigm outperforms persona simulation for achieving fair, reliable annotations, and that naive persona prompting may degrade model-human alignment.
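The inter-annotator consistency comparison can be illustrated with a small sketch using Cohen's kappa, a standard chance-corrected agreement measure. The label sequences below are hypothetical, not data from the study:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences of equal length."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each annotator labeled independently
    # according to their own marginal label frequencies.
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[k] * pb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical binary sexism labels on the same 8 tweets
human_1 = [1, 0, 1, 1, 0, 0, 1, 0]
human_2 = [1, 1, 0, 1, 0, 1, 1, 0]
genai   = [1, 0, 1, 1, 0, 0, 1, 1]

print(cohen_kappa(human_1, human_2))  # → 0.25 (human-human)
print(cohen_kappa(human_1, genai))    # → 0.75 (human-GenAI)
```

In this toy data the GenAI annotator agrees with a human more than the two humans agree with each other, mirroring the consistency pattern the summary describes.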

📝 Abstract
Understanding the sources of variability in annotations is crucial for developing fair NLP systems, especially for tasks like sexism detection where demographic bias is a concern. This study investigates the extent to which annotator demographic features influence labeling decisions compared to text content. Using a Generalized Linear Mixed Model, we quantify this influence, finding that while statistically present, demographic factors account for a minor fraction (~8%) of the observed variance, with tweet content being the dominant factor. We then assess the reliability of Generative AI (GenAI) models as annotators, specifically evaluating whether guiding them with demographic personas improves alignment with human judgments. Our results indicate that simplistic persona prompting often fails to enhance, and sometimes degrades, performance compared to baseline models. Furthermore, explainable AI (XAI) techniques reveal that model predictions rely heavily on content-specific tokens related to sexism, rather than correlates of demographic characteristics. We argue that focusing on content-driven explanations and robust annotation protocols offers a more reliable path towards fairness than persona simulation.
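The "fraction of variance explained" finding can be illustrated with a between-group variance ratio (an ANOVA-style eta-squared). This is a much cruder proxy than the paper's Generalized Linear Mixed Model, and the annotation records below are hypothetical:

```python
from collections import defaultdict
from statistics import mean

def eta_squared(records):
    """Share of label variance explained by a grouping attribute.

    records: list of (group, label) pairs with numeric labels.
    Returns between-group sum of squares / total sum of squares.
    """
    labels = [y for _, y in records]
    grand = mean(labels)
    total = sum((y - grand) ** 2 for y in labels)
    groups = defaultdict(list)
    for g, y in records:
        groups[g].append(y)
    between = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in groups.values())
    return between / total

# Hypothetical (annotator_age_group, sexism_label) annotations
records = [("18-29", 1), ("18-29", 0), ("18-29", 1), ("18-29", 0),
           ("30-49", 0), ("30-49", 1), ("30-49", 0), ("30-49", 1),
           ("50+", 1),   ("50+", 1),   ("50+", 1),   ("50+", 0)]
print(eta_squared(records))  # ≈ 0.057: the demographic attribute explains little
```

With group means barely differing, the demographic grouping accounts for only a few percent of the label variance, echoing the ~8% figure; the remaining variance is attributable to the items (tweets) themselves.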
Problem

Research questions and friction points this paper is trying to address.

Investigates demographic bias impact on LLM annotations
Evaluates GenAI reliability with demographic personas
Analyzes content vs demographic factors in annotations
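For context on the second question, persona-based prompting in this setting amounts to prepending a demographic description to the labeling instruction. The template below is a hypothetical illustration, not the paper's actual prompt:

```python
def build_prompt(tweet, persona=None):
    """Compose a sexism-labeling prompt, optionally prefixed with a persona."""
    instruction = (
        'Label the following tweet as "sexist" or "not sexist". '
        "Answer with the label only.\n"
    )
    if persona:
        # Persona conditioning: ask the model to answer as a specific annotator.
        instruction = f"You are a {persona}.\n" + instruction
    return instruction + f"Tweet: {tweet}"

baseline = build_prompt("Example tweet text")
persona = build_prompt("Example tweet text", persona="35-year-old woman from Spain")
```

The study's comparison is then between labels produced from `baseline`-style prompts and `persona`-style prompts, with the reported result that the persona variant often fails to improve alignment with human judgments.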
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used Generalized Linear Mixed Model for bias analysis
Tested Generative AI with demographic personas
Applied explainable AI to identify content-driven predictions
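The XAI finding (predictions driven by content tokens rather than demographic correlates) can be sketched with leave-one-token-out occlusion, a simple feature-attribution technique. Both the toy classifier and the lexicon below are hypothetical stand-ins for a real model:

```python
# Hypothetical lexicon standing in for a learned model's sexism-related features
LEXICON = {"slur", "demeaning"}

def toy_score(tokens):
    """Toy sexism score: fraction of tokens found in the flagged lexicon."""
    return sum(t in LEXICON for t in tokens) / len(tokens)

def occlusion_attribution(tokens):
    """Attribute the score to each token via the change when it is removed."""
    base = toy_score(tokens)
    attributions = {}
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        attributions[tok] = base - toy_score(reduced)
    return attributions

attr = occlusion_attribution(["she", "is", "demeaning", "example"])
print(attr)  # the content token "demeaning" receives the largest attribution
```

The attribution mass concentrates on the sexism-related content token, while neutral tokens (including any demographic correlates) receive near-zero or negative attributions, which is the qualitative pattern the paper reports.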