Quantifying Ambiguity in Categorical Annotations: A Measure and Statistical Inference Framework

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses soft label distributions arising from semantic ambiguity—not annotation errors—in classification tasks. We propose a novel method to quantify aleatoric uncertainty by introducing an asymmetric “undecidable” class that distinguishes between inter-class indistinguishability and intrinsic ambiguity. Our approach defines a fuzziness measure based on a refined quadratic entropy (Gini impurity) and integrates it into a Bayesian inference framework with a Dirichlet prior, jointly modeling epistemic and aleatoric uncertainty. The framework supports both frequentist point estimation and Bayesian posterior inference. Crucially, it maps discrete response distributions to scalar interpretability metrics in [0,1], enabling quantifiable, group-level ambiguity assessment. Experiments demonstrate superior performance in uncertainty calibration and data quality evaluation, effectively informing downstream machine learning pipeline optimization.
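A quadratic-entropy (Gini) core with an asymmetric "undecidable" class can be sketched in a few lines. The specific asymmetric treatment below — letting undecidable mass contribute to ambiguity directly rather than through pairwise class confusion — is an illustrative assumption, not the paper's exact refinement:

```python
def gini_impurity(p):
    """Quadratic entropy: 1 - sum(p_i^2). Zero for a point mass,
    maximal ((k-1)/k) for the uniform distribution over k classes."""
    return 1.0 - sum(pi * pi for pi in p)

def ambiguity(p, undecidable_idx=None):
    """Gini-style ambiguity score in [0, 1], normalized so the
    uniform distribution scores 1. If an explicit 'undecidable'
    class is given, its mass u is treated asymmetrically: it adds
    to ambiguity directly, while the remaining (resolvable) mass
    is scored by normalized impurity among the other classes.
    (Illustrative variant; the paper's exact measure may differ.)"""
    k = len(p)
    if undecidable_idx is None:
        return gini_impurity(p) * k / (k - 1)
    u = p[undecidable_idx]
    rest = [p[i] for i in range(k) if i != undecidable_idx]
    s = sum(rest)
    if s == 0:
        return u  # all mass on "can't solve": fully ambiguous
    # Renormalize the resolvable classes, score them, and mix with u.
    core = gini_impurity([pi / s for pi in rest]) * (k - 1) / (k - 2) if k > 2 else 0.0
    return u + (1 - u) * core
```

Note the asymmetry: mass on "can't solve" raises the score even when the resolvable classes are perfectly separated, which symmetric impurity indices cannot express.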

📝 Abstract
Human-generated categorical annotations frequently produce empirical response distributions (soft labels) that reflect ambiguity rather than simple annotator error. We introduce an ambiguity measure that maps a discrete response distribution to a scalar in the unit interval, designed to quantify aleatoric uncertainty in categorical tasks. The measure bears a close relationship to quadratic entropy (Gini-style impurity) but departs from those indices by treating an explicit "can't solve" category asymmetrically, thereby separating uncertainty arising from class-level indistinguishability from uncertainty due to explicit unresolvability. We analyze the measure's formal properties and contrast its behavior with a representative ambiguity measure from the literature. Moving beyond description, we develop statistical tools for inference: we propose frequentist point estimators for population ambiguity and derive the Bayesian posterior over ambiguity induced by Dirichlet priors on the underlying probability vector, providing a principled account of epistemic uncertainty. Numerical examples illustrate estimation, calibration, and practical use for dataset-quality assessment and downstream machine-learning workflows.
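The Bayesian side of the framework can be illustrated by Monte Carlo: with a Dirichlet prior on the underlying probability vector, the conjugate posterior given observed counts induces a posterior over any ambiguity score. The sketch below uses a plain Gini impurity as the score and stdlib gamma draws for Dirichlet sampling; the paper derives the posterior analytically, so this is an approximation for illustration only:

```python
import random

def gini(p):
    # Quadratic entropy: 1 - sum(p_i^2)
    return 1.0 - sum(pi * pi for pi in p)

def sample_dirichlet(alpha, rng):
    # Standard construction: normalized independent Gamma(a_i, 1) draws.
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    t = sum(g)
    return [x / t for x in g]

def posterior_ambiguity(counts, prior=1.0, draws=5000, seed=0):
    """Monte Carlo posterior over Gini ambiguity under a conjugate
    Dirichlet(prior, ..., prior) model: posterior is
    Dirichlet(counts + prior). Returns the posterior mean and a
    95% equal-tailed credible interval."""
    rng = random.Random(seed)
    alpha = [c + prior for c in counts]
    samples = sorted(gini(sample_dirichlet(alpha, rng)) for _ in range(draws))
    mean = sum(samples) / draws
    lo = samples[int(0.025 * draws)]
    hi = samples[int(0.975 * draws)]
    return mean, (lo, hi)
```

The credible interval captures epistemic uncertainty about the population ambiguity: with few annotations the interval is wide even when the empirical distribution looks decisive, and it shrinks as counts accumulate.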
Problem

Research questions and friction points this paper is trying to address.

Quantifying ambiguity in categorical annotations using a scalar measure
Separating uncertainty due to class indistinguishability from uncertainty due to explicit unresolvability
Developing statistical inference tools for population ambiguity estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ambiguity measure maps distributions to scalar uncertainty
Treats an explicit "can't solve" category asymmetrically, distinguishing it from class indistinguishability
Provides frequentist and Bayesian inference for population ambiguity
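The frequentist point-estimation contribution can be illustrated by contrasting two estimators of a Gini-style population ambiguity from annotation counts (assuming, for illustration, a plain Gini core): the naive plug-in estimator versus the unbiased U-statistic estimator of the sum of squared class probabilities:

```python
def plugin_gini(counts):
    """Naive plug-in estimator: Gini impurity of the empirical
    frequencies. Biased downward for ambiguity in small samples,
    since E[sum (c_i/n)^2] > sum p_i^2."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def unbiased_gini(counts):
    """Unbiased estimator of 1 - sum(p_i^2), using the U-statistic
    sum c_i(c_i - 1) / (n(n - 1)) as an unbiased estimate of
    sum p_i^2 (probability two independent responses agree)."""
    n = sum(counts)
    if n < 2:
        raise ValueError("need at least two responses")
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```

For counts [2, 2] the plug-in gives 0.5 while the unbiased estimator gives 2/3, showing how the naive estimate understates population ambiguity when annotator pools are small.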