AI Summary
This work addresses the poor calibration of deep learning-based object detectors in microscopic imaging, which stems from annotation uncertainty and undermines their reliability in biomedical applications. To mitigate this issue, the authors propose an ensemble approach that leverages multiple expert annotations: a dedicated detector is trained for each annotator to explicitly model inter-annotator variability, and predictions are aggregated to emulate expert consensus. This strategy outperforms conventional training on merged annotations, significantly improving model calibration on a colorectal organoid dataset while maintaining high detection accuracy.
Abstract
Deep learning-based object detectors have achieved impressive performance in microscopy imaging, yet their confidence estimates are often poorly calibrated, limiting their reliability in biomedical applications. In this work, we introduce a new approach that improves model calibration by leveraging multi-rater annotations. We propose to train a separate model on each individual expert's annotations and to aggregate their predictions to emulate consensus. This improves upon label sampling strategies, in which models are trained on mixed annotations, and offers a more principled way to capture inter-rater variability. Experiments on a colorectal organoid dataset annotated by two experts demonstrate that our rater-specific ensemble strategy improves calibration performance while maintaining comparable detection accuracy. These findings suggest that explicitly modelling rater disagreement can lead to more trustworthy object detectors in biomedical imaging.
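The rater-specific ensemble idea described above can be sketched in a few lines: two detectors, each trained on one expert's annotations, emit bounding boxes with confidence scores, and overlapping predictions are fused to emulate consensus. The matching rule, IoU threshold, and score-averaging scheme below are illustrative assumptions, not the authors' actual aggregation method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def ensemble_consensus(preds_a, preds_b, iou_thr=0.5):
    """Merge predictions from two rater-specific detectors.

    Boxes that overlap across raters (IoU >= iou_thr) are fused by
    averaging coordinates and confidences; unmatched boxes keep their
    box but have their score halved, reflecting that only one of the
    two rater-specific models "voted" for them.
    """
    merged, used_b = [], set()
    for box_a, conf_a in preds_a:
        match = None
        for j, (box_b, _) in enumerate(preds_b):
            if j not in used_b and iou(box_a, box_b) >= iou_thr:
                match = j
                break
        if match is not None:
            box_b, conf_b = preds_b[match]
            used_b.add(match)
            fused = tuple((p + q) / 2 for p, q in zip(box_a, box_b))
            merged.append((fused, (conf_a + conf_b) / 2))
        else:
            merged.append((box_a, conf_a / 2))
    for j, (box_b, conf_b) in enumerate(preds_b):
        if j not in used_b:
            merged.append((box_b, conf_b / 2))
    return merged

# Example: one detection agreed on by both raters, one seen by rater 1 only.
preds_rater1 = [((10, 10, 50, 50), 0.9), ((80, 80, 120, 120), 0.7)]
preds_rater2 = [((12, 12, 52, 52), 0.8)]
consensus = ensemble_consensus(preds_rater1, preds_rater2)
# The agreed box keeps a high fused confidence (0.85); the box only one
# rater's model found drops to 0.35, which is the calibration effect of
# modelling inter-rater disagreement.
```

In this toy example the disagreement between raters is translated directly into lower confidence, which is the mechanism by which the ensemble can be better calibrated than a single model trained on merged annotations.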