🤖 AI Summary
To address insufficient segmentation accuracy in 3D microscopic images and the low efficiency caused by heavy reliance on manual correction, this paper proposes an uncertainty-guided human-in-the-loop correction framework. It introduces voxel-wise uncertainty maps—used for the first time—to direct user attention to high-risk regions, enabling precise and efficient manual intervention. Integrating 3D deep learning, Bayesian uncertainty estimation, and interactive visualization, we develop VessQC, an open-source tool. A user study on real biological data demonstrates that error detection recall significantly improves from 67% to 94.0% (p = 0.007), without a statistically significant increase in total correction time. This work bridges the gap between model outputs and human expert needs, establishing a new paradigm for high-fidelity, scalable 3D bioimage analysis.
📝 Abstract
Accurate 3D microscopy image segmentation is critical for quantitative bioimage analysis but even state-of-the-art foundation models yield error-prone results. Therefore, manual curation is still widely used for either preparing high-quality training data or fixing errors before analysis. We present VessQC, an open-source tool for uncertainty-guided curation of large 3D microscopy segmentations. By integrating uncertainty maps, VessQC directs user attention to regions most likely containing biologically meaningful errors. In a preliminary user study uncertainty-guided correction significantly improved error detection recall from 67% to 94.0% (p=0.007) without a significant increase in total curation time. VessQC thus enables efficient, human-in-the-loop refinement of volumetric segmentations and bridges a key gap in real-world applications between uncertainty estimation and practical human-computer interaction. The software is freely available at github.com/MMV-Lab/VessQC.