🤖 AI Summary
Evaluating anatomical segmentation models in the absence of ground-truth annotations remains a critical challenge in medical AI. Method: This paper proposes the first anatomy-term-driven, unsupervised, collaborative evaluation framework. It harmonizes and standardizes multi-model segmentation outputs at the anatomical-structure level via the JSON-Seg schema, enabling automated cross-model and cross-structure comparison. The framework integrates a 3D Slicer plugin and the OHIF Viewer for interactive visualization and provides Summary Plot analytics. Contribution/Results: Evaluated on the NLST CT dataset across 31 anatomical structures using six leading open-source models (e.g., TotalSegmentator, MOOSE), the framework identified high inter-model consistency in lung segmentation and systematic failures in vertebral and rib segmentation, demonstrating its validity and practical utility for ground-truth-free model assessment.
📝 Abstract
**Purpose** AI-based methods for anatomy segmentation can help automate the characterization of large imaging datasets. The growing number of models with similar functionality raises the challenge of evaluating them on datasets that do not contain ground-truth annotations. We introduce a practical framework to assist in this task.

**Approach** We harmonize the segmentation results into a standard, interoperable representation, which enables consistent, terminology-based labeling of the structures. We extend 3D Slicer to streamline loading and comparison of these harmonized segmentations, and demonstrate how the standard representation simplifies review of the results using interactive summary plots and browser-based visualization with the OHIF Viewer. To demonstrate the utility of the approach, we apply it to evaluating segmentation of 31 anatomical structures (lungs, vertebrae, ribs, and heart) by six open-source models (TotalSegmentator 1.5 and 2.6, Auto3DSeg, MOOSE, MultiTalent, and CADS) on a sample of Computed Tomography (CT) scans from the publicly available National Lung Screening Trial (NLST) dataset.

**Results** The framework automates loading, structure-wise inspection, and comparison of segmentations across models. Preliminary results confirm the practical utility of the approach, allowing quick detection and review of problematic results. The comparison shows excellent agreement for some structures (e.g., lungs) but not all (e.g., some models produce invalid vertebra or rib segmentations).

**Conclusions** The resources developed are linked from https://imagingdatacommons.github.io/segmentation-comparison/, including segmentation harmonization scripts, summary plots, and visualization tools. This work assists in model evaluation in the absence of ground truth, ultimately enabling informed model selection.
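To make the harmonization idea concrete, the sketch below shows what terminology-based relabeling of multi-model outputs could look like. The label names, aliases, and codes here are purely illustrative assumptions for demonstration; they are not the actual JSON-Seg schema or the exact label sets used by TotalSegmentator or MOOSE.

```python
# Illustrative sketch: resolve each model's own label names to a shared,
# terminology-coded anatomical structure so results become comparable
# across models. All names and codes below are hypothetical examples.

CANONICAL = {
    # canonical key -> (coding scheme, code, human-readable meaning)
    "left_lung": ("SCT", "44029006", "Left lung"),
    "right_lung": ("SCT", "3341008", "Right lung"),
}

# Per-model aliases for the same structures (hypothetical label names).
MODEL_ALIASES = {
    "totalsegmentator": {"lung_left": "left_lung", "lung_right": "right_lung"},
    "moose": {"Lung_L": "left_lung", "Lung_R": "right_lung"},
}

def harmonize(model: str, raw_label: str) -> dict:
    """Return a terminology-coded record for a model's raw label name."""
    key = MODEL_ALIASES[model][raw_label]
    scheme, code, meaning = CANONICAL[key]
    return {"scheme": scheme, "code": code, "meaning": meaning, "model": model}

# Two models' differently named outputs resolve to the same coded structure,
# which is what enables structure-wise comparison without ground truth:
a = harmonize("totalsegmentator", "lung_left")
b = harmonize("moose", "Lung_L")
assert a["code"] == b["code"]
```

Once every model's output carries the same code for the same structure, downstream steps (pairwise agreement metrics, summary plots, viewer overlays) can iterate over codes rather than model-specific strings.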