Evaluating AI systems under uncertain ground truth: a case study in dermatology

📅 2023-07-05
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
This work addresses the neglect of ground-truth uncertainty in the evaluation of medical AI, particularly in dermatology, where diagnoses often take the form of differential diagnoses with substantial inter-expert disagreement. The authors propose an uncertainty-aware evaluation framework that frames annotation aggregation as posterior inference and disentangles ground-truth uncertainty into two components: annotator reliability (annotation uncertainty) and intrinsic disease ambiguity (inherent uncertainty). Building on inferred plausibility distributions, they derive uncertainty-adjusted performance metrics. As alternatives to the deterministic inverse rank normalization (IRN) adjudication of prior work, they introduce two statistical models: a probabilistic version of IRN and a Plackett–Luce-based model, enabling interpretable uncertainty quantification. Empirical validation on skin condition classification reveals substantial ground-truth uncertainty across a large portion of the dataset; conventional IRN-based evaluation severely overestimates model performance, whereas the proposed framework identifies high-risk predictions and delivers calibrated, uncertainty-aware assessment.
📝 Abstract
For safety, AI systems in health undergo thorough evaluations before deployment, validating their predictions against a ground truth that is assumed certain. However, this assumption rarely holds: the ground truth may itself be uncertain. Unfortunately, this uncertainty is largely ignored in standard evaluation of AI models, which can have severe consequences such as overestimating future performance. To avoid this, we measure the effects of ground truth uncertainty, which we assume decomposes into two main components: annotation uncertainty, which stems from the lack of reliable annotations, and inherent uncertainty, due to limited observational information. This ground truth uncertainty is ignored when estimating the ground truth by deterministically aggregating annotations, e.g., by majority voting or averaging. In contrast, we propose a framework where aggregation is done using a statistical model. Specifically, we frame aggregation of annotations as posterior inference of so-called plausibilities, representing distributions over classes in a classification setting, subject to a hyper-parameter encoding annotator reliability. Based on this model, we propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation. We present a case study applying our framework to skin condition classification from images where annotations are provided in the form of differential diagnoses. The deterministic adjudication process called inverse rank normalization (IRN) from previous work ignores ground truth uncertainty in evaluation. Instead, we present two alternative statistical models: a probabilistic version of IRN and a Plackett-Luce-based model. We find that a large portion of the dataset exhibits significant ground truth uncertainty and standard IRN-based evaluation severely overestimates performance without providing uncertainty estimates.
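The contrast the abstract draws between deterministic IRN adjudication and statistical aggregation can be illustrated with a rough sketch. This is not the paper's exact model: the inverse-rank weighting follows the usual IRN description, but the Bayesian alternative shown here (IRN weights as pseudo-counts for a Dirichlet posterior, scaled by a reliability hyperparameter) is an illustrative assumption, as are all function names and parameters.

```python
import numpy as np

def irn_aggregate(differentials, num_classes):
    """Deterministic inverse rank normalization (IRN), sketched:
    each class in an annotator's ranked differential gets weight 1/rank;
    weights are summed over annotators and normalized to a distribution."""
    weights = np.zeros(num_classes)
    for ranking in differentials:              # ranking: class ids, best first
        for rank, c in enumerate(ranking, start=1):
            weights[c] += 1.0 / rank
    return weights / weights.sum()

def dirichlet_posterior(differentials, num_classes, reliability=1.0, prior=1.0):
    """Illustrative statistical alternative (assumed, not the paper's model):
    treat inverse-rank weights as pseudo-counts scaled by a reliability
    hyperparameter and return Dirichlet posterior parameters over
    class plausibilities, instead of a single point estimate."""
    alpha = np.full(num_classes, prior)
    for ranking in differentials:
        for rank, c in enumerate(ranking, start=1):
            alpha[c] += reliability / rank
    return alpha                               # posterior is Dirichlet(alpha)

# Three annotators give ranked differential diagnoses over 4 conditions.
diffs = [[0, 2], [0, 1], [2, 0, 3]]
p_irn = irn_aggregate(diffs, num_classes=4)
alpha = dirichlet_posterior(diffs, num_classes=4, reliability=2.0)
```

The key difference: `irn_aggregate` collapses disagreement into one distribution, while `dirichlet_posterior` keeps a posterior whose spread reflects how much the annotators actually disagreed.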
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI systems with uncertain medical ground truth
Addressing expert disagreement in dermatology diagnoses
Providing performance metrics with uncertainty estimates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Statistical aggregation for uncertain ground truth
Plausibility distributions inferred from expert annotations
Sampling plausibilities to compute uncertainty-adjusted metrics
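The sampling-based evaluation described above can be illustrated with a minimal Monte Carlo sketch. The details here are assumptions for illustration, not the paper's exact procedure: draw a plausibility vector from each case's posterior, sample a ground-truth label from it, score the model's prediction against that label, and report the spread of the resulting accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def uncertainty_adjusted_accuracy(alphas, predictions, num_samples=1000):
    """Monte Carlo sketch of an uncertainty-adjusted metric: for each case,
    draw plausibilities from its Dirichlet posterior, sample a ground-truth
    label from them, and check it against the model's prediction.
    Returns the mean accuracy and a 95% interval over posterior samples."""
    accs = np.empty(num_samples)
    for s in range(num_samples):
        correct = 0
        for alpha, pred in zip(alphas, predictions):
            plaus = rng.dirichlet(alpha)             # sampled plausibilities
            label = rng.choice(len(alpha), p=plaus)  # sampled ground truth
            correct += (label == pred)
        accs[s] = correct / len(predictions)
    return accs.mean(), np.quantile(accs, [0.025, 0.975])

# Two cases: one near-certain posterior and one highly ambiguous one.
alphas = [np.array([20.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0])]
preds = [0, 1]
mean_acc, interval = uncertainty_adjusted_accuracy(alphas, preds)
```

Unlike a single deterministic score, the interval widens on datasets with many ambiguous cases, which is what flags the over-estimation that standard IRN-based evaluation hides.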
David Stutz
Research Scientist, DeepMind
deep learning, AI agents, AI for science, uncertainty estimation, computer vision
Ali Taylan Cemgil
Google DeepMind
Abhijit Guha Roy
Google Gemini
Large Language Models
Tatiana Matejovicova
Google DeepMind
Melih Barsbey
Imperial College London
compressibility, robust generalization
Patricia Strachan
Google
M. Schaekermann
Google
J. Freyberg
Google
R. Rikhye
Google
Beverly Freeman
Google
J. P. Matos
Google
Umesh Telang
Google
D. Webster
Google
Yuan Liu
Google
G. Corrado
Google
Yossi Matias
Google
Pushmeet Kohli
DeepMind
AI for Science, Machine Learning, AI Safety, Computer Vision, Program Synthesis
Yun Liu
Google
A. Doucet
Google DeepMind
A. Karthikesalingam
Google