🤖 AI Summary
Deploying high-cost commercial medical AI systems remains infeasible in many low-resource settings. Method: This paper proposes a zero-shot 3D breast MRI tumor segmentation framework based on SAM2 that requires only a single-slice bounding-box annotation and uses a center-outward inter-slice propagation strategy to segment the whole volume, eliminating the need for model fine-tuning or extensive manual annotation. Results: Evaluated on a large clinical cohort, the method achieves a Dice score of 0.82±0.09, outperforming both top-down and bottom-up propagation baselines. Notably, SAM2 shows strong generalization and morphological robustness despite no task-specific training. This work provides an empirical validation of general-purpose vision foundation models for zero-shot, low-cost 3D medical image segmentation, pointing toward a clinically deployable paradigm for AI-assisted diagnosis in resource-constrained environments.
📝 Abstract
Breast MRI provides high-resolution volumetric imaging critical for tumor assessment and treatment planning, yet manual interpretation of 3D scans remains labor-intensive and subjective. While AI-powered tools hold promise for accelerating medical image analysis, adoption of commercial medical AI products remains limited in low- and middle-income countries due to high license costs, proprietary software, and infrastructure demands. In this work, we investigate whether the Segment Anything Model 2 (SAM2) can be adapted for low-cost, minimal-input 3D tumor segmentation in breast MRI. Using a single bounding-box annotation on one slice, we propagate segmentation predictions across the 3D volume using three different slice-wise tracking strategies: top-to-bottom, bottom-to-top, and center-outward. We evaluate these strategies across a large cohort of patients and find that center-outward propagation yields the most consistent and accurate segmentations. Despite being applied zero-shot, with no training on volumetric medical data, SAM2 achieves strong segmentation performance under minimal supervision. We further analyze how segmentation performance relates to tumor size, location, and shape, identifying key failure modes. Our results suggest that general-purpose foundation models such as SAM2 can support 3D medical image analysis with minimal supervision, offering an accessible and affordable alternative for resource-constrained settings.
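As a rough illustration of the center-outward strategy described above, the sketch below visits slices in two passes radiating from the annotated center slice, chaining each slice's prompt from its inner neighbor's predicted mask. The `segment_slice` callable is a purely hypothetical stand-in for a SAM2 prompt-and-predict call; this is a sketch of the propagation logic only, not the paper's actual implementation.

```python
def propagation_passes(n_slices, center):
    """Split a volume into two chained passes radiating from the center slice.

    The center slice carries the user's bounding-box annotation; every other
    slice is prompted with the prediction of the slice just inside it.
    """
    upward = list(range(center, n_slices))   # center, center+1, ..., n_slices-1
    downward = list(range(center, -1, -1))   # center, center-1, ..., 0
    return upward, downward


def center_outward_segment(n_slices, center, segment_slice, box_prompt):
    """Run center-outward propagation with a caller-supplied per-slice
    segmenter (hypothetical; a real system would invoke SAM2 here).

    segment_slice(index, prompt) -> mask, where prompt is either the initial
    bounding box (center slice) or the previous slice's predicted mask.
    """
    masks = {}
    for pass_indices in propagation_passes(n_slices, center):
        prompt = box_prompt                  # each pass restarts at the box
        for i in pass_indices:
            if i not in masks:               # center is segmented only once
                masks[i] = segment_slice(i, prompt)
            prompt = masks[i]                # chain the prompt outward
    return masks
```

Top-to-bottom or bottom-to-top propagation would instead be a single pass over `range(n_slices)` or its reverse; the center-outward variant keeps the chained prompt closer to the annotated slice, which is one plausible reason it degrades less toward the volume's edges.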