🤖 AI Summary
This study addresses the reliability issues of promptable segmentation models in gynecological MRI arising from user-dependent prompt variations. The authors propose an interpretable framework that, for the first time, disentangles prompt dependence into two distinct components: prompt ambiguity (inter-user variability) and local sensitivity (imprecision in user interaction). Built upon the Segment Anything Model, the framework employs quantitative metrics to analyze the relationship between prompt variability and segmentation performance for uterine and bladder delineation. Evaluated on two female pelvic MRI datasets, the proposed metrics exhibit strong negative correlations with segmentation accuracy while demonstrating low mutual correlation, thereby effectively revealing distinct prompt-related failure modes. This approach provides a principled basis for evaluating model robustness and supports safer clinical deployment of interactive segmentation systems.
📝 Abstract
Promptable segmentation models (e.g., the Segment Anything Model) enable generalizable, zero-shot segmentation across diverse domains. Although predictions are deterministic for a fixed image-prompt pair, the robustness of these models to variations in user prompts, referred to as prompt dependence, remains underexplored. In safety-critical workflows with substantial inter-user variability, interpretable and informative frameworks are needed to evaluate prompt dependence. In this work, we assess the reliability of promptable segmentation by analyzing and measuring its sensitivity to prompt variability. We introduce the first formulation of prompt dependence that explicitly disentangles prompt ambiguity (inter-user variability) from local sensitivity (interaction imprecision), offering an interpretable view of segmentation robustness. Experiments on two female pelvic MRI datasets for uterus and bladder segmentation reveal a strong negative correlation between both metrics and segmentation performance, highlighting the value of our framework for assessing robustness. The two metrics have low mutual correlation, supporting the disentangled design of our formulation, and provide meaningful indicators of prompt-related failure modes.
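The abstract does not give the exact metric definitions, but the two notions can be illustrated with a minimal sketch: prompt ambiguity as the mean pairwise Dice disagreement between predictions from different users' prompts, and local sensitivity as the mean Dice drop when a single point prompt is jittered by a few pixels. Here `predict_mask` is a hypothetical stand-in for any SAM-style promptable model (image + point prompt → binary mask); the specific definitions below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def prompt_ambiguity(predict_mask, image, user_prompts):
    """Mean pairwise Dice disagreement across prompts from different users.

    A higher value means predictions diverge more strongly across
    plausible inter-user prompt choices.
    """
    masks = [predict_mask(image, p) for p in user_prompts]
    dists = [1.0 - dice(masks[i], masks[j])
             for i in range(len(masks)) for j in range(i + 1, len(masks))]
    return float(np.mean(dists)) if dists else 0.0

def local_sensitivity(predict_mask, image, prompt, radius=2, n_jitter=8, seed=0):
    """Mean Dice drop when a point prompt is jittered within a small radius.

    Probes imprecision in user interaction around one intended click,
    independently of disagreement between different users.
    """
    rng = np.random.default_rng(seed)
    base = predict_mask(image, prompt)
    drops = []
    for _ in range(n_jitter):
        offset = rng.integers(-radius, radius + 1, size=2)
        jittered = (prompt[0] + int(offset[0]), prompt[1] + int(offset[1]))
        drops.append(1.0 - dice(base, predict_mask(image, jittered)))
    return float(np.mean(drops))
```

Under such a formulation, identical prompts yield zero ambiguity, non-overlapping predictions approach an ambiguity of one, and the two scores can vary independently, which is consistent with the low mutual correlation the abstract reports.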