Dissecting Subjectivity and the "Ground Truth" Illusion in Data Annotation

📅 2026-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the prevailing tendency in machine learning to mischaracterize annotation disagreement as mere noise, thereby overlooking its value as a sociotechnical signal. Through a systematic literature review and reflexive thematic analysis of 346 papers from seven top-tier conferences (2020–2025), the work uncovers the mechanisms behind the "consensus trap" and its detrimental effects on algorithmic fairness. It critiques the "noisy sensor" fallacy and advocates reinterpreting disagreement as a high-fidelity signal, proposing a new annotation paradigm centered on pluralistic experiential mappings rather than a singular "ground truth." The analysis further reveals structural inequities, including the imposition of Western norms through geographic hegemony and annotators' compliance driven by economic precarity, highlighting the erasure of positional visibility and the role of models as mediators of bias.

📝 Abstract
In machine learning, "ground truth" refers to the assumed correct labels used to train and evaluate models. However, the foundational "ground truth" paradigm rests on a positivistic fallacy that treats human disagreement as technical noise rather than a vital sociotechnical signal. This systematic literature review analyzes research published between 2020 and 2025 across seven premier venues (ACL, AIES, CHI, CSCW, EAAMO, FAccT, and NeurIPS), investigating the mechanisms in data annotation practices that facilitate this "consensus trap". Our identification phase captured 30,897 records, which were refined via a tiered keyword filtration schema to a high-recall corpus of 3,042 records for manual screening, resulting in a final included corpus of 346 papers for qualitative synthesis. Our reflexive thematic analysis reveals that systemic failures in positional legibility, combined with the recent architectural shift toward human-as-verifier models, specifically the reliance on model-mediated annotations, introduce deep-seated anchoring bias and effectively remove human voices from the loop. We further demonstrate how geographic hegemony imposes Western norms as universal benchmarks, often enforced by the performative alignment of precarious data workers who prioritize requester compliance over honest subjectivity to avoid economic penalties. Critiquing the "noisy sensor" fallacy, where statistical models misdiagnose cultural pluralism as random error, we argue for reclaiming disagreement as a high-fidelity signal essential for building culturally competent models. To address these systemic tensions, we propose a roadmap for pluralistic annotation infrastructures that shift the objective from discovering a singular "right" answer to mapping the diversity of human experience.
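The "noisy sensor" fallacy the abstract critiques can be illustrated with a minimal sketch. The example below uses entirely hypothetical annotation data (the labels, annotator counts, and function names are illustrative, not from the paper): majority-vote aggregation collapses six annotations into one "ground truth" label and silently erases the minority view, whereas retaining the full label distribution preserves disagreement as a signal.

```python
from collections import Counter

def majority_vote(labels):
    """Collapse all annotations into a single 'ground truth' label,
    discarding any minority perspective as if it were noise."""
    return Counter(labels).most_common(1)[0][0]

def label_distribution(labels):
    """Preserve disagreement as a distribution over labels,
    so systematic minority judgments remain visible downstream."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Hypothetical toxicity annotations from six annotators; the 4-2 split
# could reflect a consistent cultural difference rather than random error.
annotations = ["toxic", "toxic", "toxic", "toxic", "not_toxic", "not_toxic"]

print(majority_vote(annotations))       # single aggregated label
print(label_distribution(annotations))  # disagreement retained as proportions
```

Training on the distribution rather than the aggregated label is one way pluralistic annotation infrastructures can treat a 4-2 split as information about the population of judgments instead of as annotator error.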
Problem

Research questions and friction points this paper is trying to address.

ground truth
data annotation
subjectivity
human disagreement
annotation bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

ground truth critique
pluralistic annotation
model-mediated annotation
anchoring bias
cultural competence