Prompt learning with bounding box constraints for medical image segmentation

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In medical image segmentation, pixel-level annotations are prohibitively expensive, while existing prompt learning methods rely on fully supervised segmentation masks. This work proposes the first prompt learning framework that adapts foundation models (e.g., SAM) using only bounding-box annotations as weak supervision. The method jointly enforces bounding-box constraints, performs multi-objective optimization, and generates self-iterative pseudo-labels to progressively refine segmentation. Crucially, it eliminates handcrafted prompts by automatically deriving high-quality point and box prompts directly from bounding-box annotations, and integrates a model self-feedback mechanism to enhance segmentation robustness. Evaluated on diverse multimodal medical datasets, the approach achieves a mean Dice score of 84.90%, substantially outperforming both state-of-the-art fully supervised and weakly supervised methods, demonstrating its efficacy and generalizability under low-labeling-cost regimes.
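The bounding-box constraints mentioned above can be given a minimal sketch. The function below is an illustrative assumption, not the paper's actual losses: it penalizes foreground probability outside the box and applies a relaxed tightness prior (each row and column of the box interior should contain some foreground), which is one common way to turn a box annotation into a training signal.

```python
import numpy as np

def box_constraint_loss(probs, box):
    """Illustrative box-constraint loss on a (H, W) map of foreground
    probabilities in [0, 1]; box is (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    # Background term: any foreground mass outside the box is penalized.
    outside = probs.copy()
    outside[y0:y1, x0:x1] = 0.0  # mask out the box interior
    n_out = probs.size - (y1 - y0) * (x1 - x0)
    l_out = outside.sum() / max(n_out, 1)
    # Relaxed tightness term: every row and column crossing the box
    # interior should contain at least some foreground response.
    inside = probs[y0:y1, x0:x1]
    l_in = (1 - inside.max(axis=1)).mean() + (1 - inside.max(axis=0)).mean()
    return l_out + l_in
```

A prediction that exactly fills the box incurs zero loss, while foreground leakage outside the box (or an empty box interior) is penalized; in the paper this kind of constraint is combined with pseudo-labels from the prompted foundation model in a multi-objective scheme.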

📝 Abstract
Pixel-wise annotations are notoriously laborious and costly to obtain in the medical domain. To mitigate this burden, weakly supervised approaches based on bounding box annotations, which are much easier to acquire, offer a practical alternative. Vision foundation models have recently shown noteworthy segmentation performance when provided with prompts such as points or bounding boxes. Prompt learning exploits these models by adapting them to downstream tasks and automating segmentation, thereby reducing user intervention. However, existing prompt learning approaches depend on fully annotated segmentation masks. This paper proposes a novel framework that combines the representational power of foundation models with the annotation efficiency of weakly supervised segmentation. More specifically, our approach automates prompt generation for foundation models using only bounding box annotations. Our proposed optimization scheme integrates multiple constraints derived from box annotations with pseudo-labels generated by the prompted foundation model. Extensive experiments across multimodal datasets reveal that our weakly supervised method achieves an average Dice score of 84.90% in a limited data setting, outperforming existing fully-supervised and weakly-supervised approaches. The code is available at https://github.com/Minimel/box-prompt-learning-VFM.git
Problem

Research questions and friction points this paper is trying to address.

Reduces need for costly pixel-wise medical image annotations
Automates prompt generation using bounding box annotations
Improves segmentation accuracy in limited data settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automates prompt generation using box annotations
Integrates box constraints with model pseudo-labels
Leverages foundation models for weak supervision
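The "automates prompt generation" contribution can be illustrated with a minimal sketch. The helper name and the box-center heuristic below are illustrative assumptions (the paper learns its prompts rather than using a fixed rule); the output keys follow the point/box prompt convention used by SAM-style predictors.

```python
import numpy as np

def prompts_from_box(box):
    """Derive a foreground point prompt (box center) and a box prompt
    from a single bounding-box annotation (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return {
        "point_coords": np.array([center]),          # (1, 2) point prompt
        "point_labels": np.array([1]),               # 1 = foreground point
        "box": np.array(box, dtype=float),           # the box prompt itself
    }
```

In the paper's framework these prompts are refined automatically during training rather than fixed, but the sketch shows how a single box annotation already yields both prompt types a foundation model expects.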
Mélanie Gaillochet
École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada; Mila - Quebec AI Institute, Montréal, QC H2S 3H1, Canada; Polytechnique Montréal, QC H3T 1J4, Canada.
Mehrdad Noori
PhD Student, Ecole de Technologie Supérieure
Deep Learning, Machine Learning, Computer Vision, Multimodal AI
Sahar Dastani
École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada
Christian Desrosiers
Professor, École de technologie supérieure - LIVIA - Regroupement stratégique REPARTI
Data Mining, Machine Learning, Pattern Recognition, Computer Vision, Medical Imaging
Hervé Lombaert
École de Technologie Supérieure, Montréal, QC H3C 1K3, Canada; Mila - Quebec AI Institute, Montréal, QC H2S 3H1, Canada; Polytechnique Montréal, QC H3T 1J4, Canada.