🤖 AI Summary
Traditional machine learning treats systematic variation in human annotations—such as ambiguity, interpretive disagreement, and errors—as mere noise, thereby obscuring the true sources of error. This work reframes annotation as a measurement process and introduces the first statistical framework that simultaneously accommodates both shared and individualized notions of “ground truth.” The proposed model decomposes annotation variability into four interpretable components: instance difficulty, annotator bias, situational noise, and relational alignment. Leveraging probabilistic modeling, the authors estimate and validate these components on multi-annotator natural language inference data. Empirical results confirm the presence of all four sources of variation and demonstrate that the framework effectively disentangles annotator behavior, offering actionable insights for constructing higher-quality datasets.
📝 Abstract
Supervised machine learning assumes that labeled data provide accurate measurements of the concepts models are meant to learn. Yet in practice, human labeling introduces systematic variation arising from ambiguous items, divergent interpretations, and simple mistakes. Machine learning research commonly treats all disagreement as noise, which obscures these distinctions and limits our understanding of what models actually learn. This paper reframes annotation as a measurement process and introduces a statistical framework for decomposing labeling outcomes into interpretable sources of variation: instance difficulty, annotator bias, situational noise, and relational alignment. The framework extends classical measurement-error models to accommodate both shared and individualized notions of truth—corresponding to the traditional view of disagreement as error and the human-label-variation view of disagreement as signal—and provides a diagnostic for assessing which regime better characterizes a given task. Applying the proposed model to a multi-annotator natural language inference dataset, we find empirical evidence for all four theorized components and show that the framework disentangles these distinct sources of annotator variation. We conclude with implications for data-centric machine learning and outline how this approach can guide the development of a more systematic science of labeling.
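The decomposition described in the abstract can be illustrated with a minimal variance-components sketch. This is a hypothetical toy, not the paper's actual model: it assumes continuous annotation scores with additive instance-difficulty, annotator-bias, and situational-noise terms (the relational-alignment component is omitted), and recovers the component variances with a standard two-way method-of-moments estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical additive model of annotation outcomes:
#   y[i, j] = mu + d[i] + b[j] + e[i, j]
# d: instance difficulty, b: annotator bias, e: situational noise.
# (The paper's relational-alignment component is omitted in this sketch.)
n_items, n_annotators = 200, 30
sd_d, sd_b, sd_e = 0.8, 0.5, 0.3          # assumed component scales
d = rng.normal(0.0, sd_d, n_items)
b = rng.normal(0.0, sd_b, n_annotators)
e = rng.normal(0.0, sd_e, (n_items, n_annotators))
y = d[:, None] + b[None, :] + e

# Two-way ANOVA-style method-of-moments decomposition.
grand = y.mean()
item_means = y.mean(axis=1)
annot_means = y.mean(axis=0)
resid = y - item_means[:, None] - annot_means[None, :] + grand

ms_item = n_annotators * ((item_means - grand) ** 2).sum() / (n_items - 1)
ms_annot = n_items * ((annot_means - grand) ** 2).sum() / (n_annotators - 1)
ms_resid = (resid ** 2).sum() / ((n_items - 1) * (n_annotators - 1))

var_e_hat = ms_resid                              # situational noise
var_d_hat = (ms_item - ms_resid) / n_annotators   # instance difficulty
var_b_hat = (ms_annot - ms_resid) / n_items       # annotator bias

print(f"difficulty: {var_d_hat:.3f}, bias: {var_b_hat:.3f}, noise: {var_e_hat:.3f}")
```

With enough items and annotators, the recovered variances approach the simulated values (0.64, 0.25, 0.09), showing how disagreement that looks like undifferentiated noise in the raw labels can be attributed to distinct sources. The paper's framework is probabilistic and handles categorical labels and individualized notions of truth, which this linear sketch does not attempt.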