🤖 AI Summary
In medical image segmentation, pixel-level annotations are prohibitively expensive, while existing prompt learning methods rely on fully supervised segmentation masks. This work proposes the first prompt learning framework that requires only bounding-box annotations to adapt foundation models (e.g., SAM) under weak supervision. The method jointly enforces bounding-box constraints, performs multi-objective optimization, and generates self-iterative pseudo-labels to progressively refine segmentation. Crucially, it eliminates handcrafted prompts by automatically deriving high-quality point and box prompts directly from bounding-box annotations, and integrates a model self-feedback mechanism to enhance segmentation robustness. Evaluated on diverse multimodal medical datasets, the approach achieves a mean Dice score of 84.90%, substantially outperforming state-of-the-art fully supervised and weakly supervised methods and demonstrating its efficacy and generalizability in low-labeling-cost regimes.
📝 Abstract
Pixel-wise annotations are notoriously laborious and costly to obtain in the medical domain. To mitigate this burden, weakly supervised approaches based on bounding box annotations, which are much easier to acquire, offer a practical alternative. Vision foundation models have recently shown noteworthy segmentation performance when provided with prompts such as points or bounding boxes. Prompt learning exploits these models by adapting them to downstream tasks and automating segmentation, thereby reducing user intervention. However, existing prompt learning approaches depend on fully annotated segmentation masks. This paper proposes a novel framework that combines the representational power of foundation models with the annotation efficiency of weakly supervised segmentation. More specifically, our approach automates prompt generation for foundation models using only bounding box annotations. Our proposed optimization scheme integrates multiple constraints derived from box annotations with pseudo-labels generated by the prompted foundation model. Extensive experiments across multimodal datasets reveal that our weakly supervised method achieves an average Dice score of 84.90% in a limited data setting, outperforming existing fully supervised and weakly supervised approaches. The code is available at https://github.com/Minimel/box-prompt-learning-VFM.git