Optimizing Resources for On-the-Fly Label Estimation with Multiple Unknown Medical Experts

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
In medical screening, expert annotation is costly and expert competence is often unknown; moreover, existing noisy-label aggregation methods lack real-time adaptability to dynamic data and evolving expert pools. To address this, we propose a prior-free adaptive online annotation framework that employs incremental Bayesian inference and uncertainty modeling to estimate labels for streaming samples in real time—without requiring pre-labeled data or prior knowledge of expert reliability—and dynamically triggers multi-expert queries based on instance difficulty. Its core innovation lies in enabling “zero-assumption” active-learning–guided label aggregation, supporting plug-and-play integration of both new data instances and newly available annotators. Experiments on three multi-annotator medical datasets demonstrate that our method reduces expert query volume by up to 50% compared to non-adaptive baselines, while maintaining classification accuracy—thereby significantly improving annotation efficiency and system automation.
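The confidence-driven querying loop described above can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that): it assumes a binary labeling task, tracks each expert's unknown reliability with a Beta posterior (here seeded with a weak Beta(2, 1) prior so that votes carry some information before any feedback arrives), and queries experts one at a time until the posterior label confidence crosses a threshold. The class name, prior choice, and update-against-consensus rule are all illustrative assumptions.

```python
import random


class AdaptiveAnnotator:
    """Sketch of confidence-driven incremental label estimation.

    Each expert's reliability (probability of giving the correct binary
    label) is tracked with a Beta posterior. A weak Beta(2, 1) prior is an
    assumption made here so early votes are mildly informative; no
    pre-labeled data or known expert competence is required.
    """

    def __init__(self, n_experts, confidence=0.95):
        # (alpha, beta) parameters of each expert's reliability posterior
        self.beta = [[2.0, 1.0] for _ in range(n_experts)]
        self.confidence = confidence

    def reliability(self, e):
        a, b = self.beta[e]
        return a / (a + b)  # posterior mean accuracy of expert e

    def label(self, query_expert):
        """Query experts one at a time until the posterior label
        confidence exceeds the threshold; return (label, n_queries)."""
        votes = []  # (expert_index, vote) pairs gathered so far
        odds = 1.0  # posterior odds P(y=1)/P(y=0), uniform label prior
        order = random.sample(range(len(self.beta)), len(self.beta))
        for e in order:
            v = query_expert(e)
            votes.append((e, v))
            p = self.reliability(e)
            # likelihood ratio contributed by this expert's vote
            odds *= (p / (1 - p)) if v == 1 else ((1 - p) / p)
            p1 = odds / (1 + odds)
            if max(p1, 1 - p1) >= self.confidence:
                break  # confident enough: stop querying further experts
        y_hat = 1 if odds >= 1 else 0
        # update reliability posteriors by agreement with the consensus
        for e, v in votes:
            self.beta[e][0 if v == y_hat else 1] += 1
        return y_hat, len(votes)
```

As the reliability posteriors sharpen over a stream of instances, a few votes from trusted experts suffice to cross the threshold, so the average number of queries per instance drops below the full expert pool, which is the efficiency effect the paper reports.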

📝 Abstract
Accurate ground truth estimation in medical screening programs often relies on coalitions of experts and peer second opinions. Algorithms that efficiently aggregate noisy annotations can enhance screening workflows, particularly when data arrive continuously and expert proficiency is initially unknown. However, existing algorithms do not meet the requirements for seamless integration into screening pipelines. We therefore propose an adaptive approach for real-time annotation that (I) supports on-the-fly labeling of incoming data, (II) operates without prior knowledge of medical experts or pre-labeled data, and (III) dynamically queries additional experts based on the latent difficulty of each instance. The method incrementally gathers expert opinions until a confidence threshold is met, providing accurate labels with reduced annotation overhead. We evaluate our approach on three multi-annotator classification datasets across different modalities. Results show that our adaptive querying strategy reduces the number of expert queries by up to 50% while achieving accuracy comparable to a non-adaptive baseline. Our code is available at https://github.com/tbary/MEDICS
Problem

Research questions and friction points this paper is trying to address.

Estimating medical labels without prior expert knowledge
Aggregating noisy annotations in real-time screening workflows
Reducing expert queries while maintaining labeling accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive real-time annotation without prior expert knowledge
Dynamic expert querying based on instance difficulty assessment
Confidence-driven incremental labeling with reduced annotation overhead
Tim Bary
ICTEAM, UCLouvain, Louvain-la-Neuve, Belgium
Tiffanie Godelaine
ICTEAM, UCLouvain, Louvain-la-Neuve, Belgium
Axel Abels
FNRS Postdoctoral researcher (CR), Machine Learning Group, Université Libre de Bruxelles
Benoît Macq
ICTEAM, UCLouvain, Louvain-la-Neuve, Belgium