🤖 AI Summary
To address the weak volumetric perception and limited interactivity of existing foundation models for interactive 3D biomedical image segmentation, this paper proposes a training paradigm that integrates dynamic volumetric prompting with content-aware adaptive cropping. The method simulates realistic multi-turn user feedback to strengthen the encoder’s understanding of 3D anatomical structures while keeping training tractable on a single GPU. Key contributions include: (i) the first application of dynamic volumetric prompting to interactive 3D segmentation; (ii) a semantic-saliency-driven adaptive cropping strategy that preserves both global context and local detail; and (iii) a sequential feedback learning framework. Evaluated in the Foundation Models for Interactive 3D Biomedical Image Segmentation competition, the model achieves a final Dice score of 0.6385, a Normalized Surface Distance (NSD) of 0.6614, and area-under-the-curve scores of 2.4799 (Dice) and 2.5671 (NSD), outperforming baseline methods.
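The area-under-the-curve scores above aggregate segmentation quality across successive refinement turns. A minimal sketch of how such a metric can be computed, assuming unit spacing between turns and a trapezoidal rule (our reading, not the competition's stated definition):

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def dice_auc(per_turn_dice):
    """Trapezoidal area under the Dice-vs-interaction-turn curve
    (unit spacing between turns; the competition's exact AUC
    definition is an assumption of this sketch)."""
    s = np.asarray(per_turn_dice, dtype=float)
    return float(np.sum((s[1:] + s[:-1]) / 2.0))

# Example: Dice improving over five simulated refinement turns.
turns = [0.40, 0.55, 0.62, 0.66, 0.68]
print(round(dice_auc(turns), 3))  # → 2.37
```

A curve that rises quickly in early turns yields a larger AUC than one reaching the same final Dice slowly, which is why the metric rewards responsive interactive models.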
📝 Abstract
Interactive 3D biomedical image segmentation requires efficient models that can iteratively refine predictions based on user prompts. Current foundation models either lack volumetric awareness or suffer from limited interactive capabilities. We propose a training strategy that combines dynamic volumetric prompt generation with content-aware adaptive cropping to optimize the use of the image encoder. Our method simulates realistic user interaction patterns during training while addressing the computational challenges of learning from sequential refinement feedback on a single GPU. For efficient training, we initialize our network using the publicly available weights from the nnInteractive segmentation model. Evaluation on the **Foundation Models for Interactive 3D Biomedical Image Segmentation** competition demonstrates strong performance with an average final Dice score of 0.6385, normalized surface distance of 0.6614, and area-under-the-curve metrics of 2.4799 (Dice) and 2.5671 (NSD).
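As a rough illustration of content-aware adaptive cropping, the sketch below centers a fixed-size patch on the centroid of a saliency map and clamps it to the volume bounds. The saliency signal, the fixed patch size, and the centering rule are all assumptions of this sketch, not the paper's actual semantic-saliency method:

```python
import numpy as np

def adaptive_crop(volume, saliency, patch=(64, 64, 64)):
    """Crop a fixed-size patch centered on the saliency centroid,
    clamped so the patch stays inside the volume. The saliency map
    stands in for the paper's semantic-saliency signal; the fixed
    patch size is also an assumption of this sketch."""
    coords = np.argwhere(saliency > saliency.mean())
    if len(coords):
        center = coords.mean(axis=0).round().astype(int)
    else:  # no salient voxels: fall back to the volume center
        center = np.array(volume.shape) // 2
    starts = [int(np.clip(c - p // 2, 0, s - p))
              for c, p, s in zip(center, patch, volume.shape)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, patch))
    return volume[sl], sl

# Example: saliency concentrated near one corner pulls the crop there.
vol = np.random.rand(128, 128, 128).astype(np.float32)
sal = np.zeros_like(vol)
sal[100:110, 100:110, 100:110] = 1.0
crop, sl = adaptive_crop(vol, sal)
print(crop.shape)  # → (64, 64, 64)
```

Centering crops on salient content rather than on a fixed grid is one way to keep local detail around the region of interest while the surrounding context still fits inside the encoder's input window.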