Minority Reports: Balancing Cost and Quality in Ground Truth Data Annotation

📅 2025-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the tension between high annotation costs and stringent accuracy requirements, this paper proposes a method for pruning annotation task assignments likely to produce minority reports, i.e., annotator responses that deviate from the majority vote. It is the first to jointly model image ambiguity, inter-annotator variability, and annotator fatigue as the primary factors driving minority reports. Grounded in this model, the method estimates, before a task assignment is executed, the probability that the annotator will disagree with the eventual majority vote, and removes redundant assignments accordingly. Experiments across multiple computer vision benchmarks show that the approach reduces annotation volume by over 60% on average with only marginal degradation in label quality, saving approximately 6.6 person-days of labor, while supporting customizable accuracy–cost trade-offs. The key contributions are: (1) the first systematic causal modeling of minority-report generation, and (2) a transparent, tunable framework for optimizing annotation redundancy.

📝 Abstract
High-quality data annotation is an essential but laborious and costly aspect of developing machine learning-based software. We explore the inherent tradeoff between annotation accuracy and cost by detecting and removing minority reports -- instances where annotators provide incorrect responses -- that indicate unnecessary redundancy in task assignments. We propose an approach to prune potentially redundant annotation task assignments before they are executed by estimating the likelihood of an annotator disagreeing with the majority vote for a given task. Our approach is informed by an empirical analysis over computer vision datasets annotated by a professional data annotation platform, which reveals that the likelihood of a minority report event is dependent primarily on image ambiguity, worker variability, and worker fatigue. Simulations over these datasets show that we can reduce the number of annotations required by over 60% with a small compromise in label quality, saving approximately 6.6 days-equivalent of labor. Our approach provides annotation service platforms with a method to balance cost and dataset quality. Machine learning practitioners can tailor annotation accuracy levels according to specific application needs, thereby optimizing budget allocation while maintaining the data quality necessary for critical settings like autonomous driving technology.
Problem

Research questions and friction points this paper is trying to address.

Balancing annotation cost and quality in machine learning
Detecting and removing incorrect annotator responses (minority reports)
Reducing redundant annotations while maintaining acceptable label accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detect and remove minority reports to reduce redundancy
Estimate annotator disagreement likelihood to prune tasks
Balance cost and quality via empirical dataset analysis
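
The pruning idea above can be sketched in a few lines. The features (image ambiguity, worker variability, worker fatigue) follow the paper's empirical analysis, but the logistic scoring function, its weights, and the threshold below are illustrative assumptions, not the authors' fitted model; field names like `task_id` are likewise hypothetical.

```python
import math

def minority_report_probability(ambiguity, variability, fatigue,
                                weights=(2.0, 1.5, 1.0), bias=-3.0):
    """Estimate the probability that an annotator's response will disagree
    with the eventual majority vote. The logistic form and weights are
    illustrative assumptions, not the paper's fitted model."""
    z = bias + weights[0] * ambiguity + weights[1] * variability + weights[2] * fatigue
    return 1.0 / (1.0 + math.exp(-z))

def prune_assignments(assignments, threshold=0.5, min_annotators=3):
    """Drop task assignments whose predicted minority-report probability
    exceeds the threshold, while keeping at least `min_annotators`
    assignments per task so a majority vote remains possible."""
    kept, kept_per_task = [], {}
    # Consider the most reliable assignments first, so the per-task quota
    # is filled by low-risk annotators before high-risk ones.
    def risk(a):
        return minority_report_probability(a["ambiguity"], a["variability"], a["fatigue"])
    for a in sorted(assignments, key=risk):
        task = a["task_id"]
        if risk(a) <= threshold or kept_per_task.get(task, 0) < min_annotators:
            kept.append(a)
            kept_per_task[task] = kept_per_task.get(task, 0) + 1
    return kept
```

For example, a task with three low-risk and two high-risk assignments would keep only the three low-risk ones, cutting that task's annotation cost by 40% while preserving the majority-vote quorum; tuning `threshold` trades label quality against cost, in the spirit of the paper's accuracy–cost knob.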