Promptable cancer segmentation using minimal expert-curated data

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Medical image cancer segmentation heavily relies on large-scale, high-quality annotated data; however, expert annotation is costly and suffers from substantial inter-observer variability, hindering clinical deployment. Existing weakly supervised and promptable segmentation methods either require massive paired pathology–imaging datasets or fail to generalize robustly to lesion regions. To address this, we propose a dual-classifier collaborative prompting framework that achieves accurate single-point-prompt-driven segmentation using only 24 fully annotated and 8 weakly annotated images. By synergistically integrating fully supervised and weakly supervised signals, our method establishes a multi-level classifier collaboration mechanism for prompt-guided region search. In prostate cancer segmentation, it matches the performance of fully supervised models and significantly outperforms state-of-the-art promptable approaches (e.g., SAM), reducing annotation requirements to just 1% of those needed by comparable methods.

Technology Category

Application Category

📝 Abstract
Automated segmentation of cancer on medical images can aid targeted diagnostic and therapeutic procedures. However, its adoption is limited by the high cost of expert annotations required for training and inter-observer variability in datasets. While weakly-supervised methods mitigate some challenges, using binary histology labels for training as opposed to requiring full segmentation, they require large paired datasets of histology and images, which are difficult to curate. Similarly, promptable segmentation aims to allow segmentation with no re-training for new tasks at inference, however, existing models perform poorly on pathological regions, again necessitating large datasets for training. In this work we propose a novel approach for promptable segmentation requiring only 24 fully-segmented images, supplemented by 8 weakly-labelled images, for training. Curating this minimal data to a high standard is relatively feasible and thus issues with the cost and variability of obtaining labels can be mitigated. By leveraging two classifiers, one weakly-supervised and one fully-supervised, our method refines segmentation through a guided search process initiated by a single-point prompt. Our approach outperforms existing promptable segmentation methods, and performs comparably with fully-supervised methods, for the task of prostate cancer segmentation, while using substantially less annotated data (up to 100X less). This enables promptable segmentation with very minimal labelled data, such that the labels can be curated to a very high standard.
Problem

Research questions and friction points this paper is trying to address.

Automated cancer segmentation reduces costly expert annotations
Weakly-supervised methods need large paired histology-image datasets
Promptable segmentation struggles with pathological regions without big datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Promptable segmentation with minimal expert-curated data
Combines weakly and fully-supervised classifiers
Guided search process from single-point prompt
🔎 Similar Papers
No similar papers found.
L
Lynn Karam
Department of Medical Physics and Biomedical Engineering, University College London, London, UK; UCL Hawkes Institute, University College London, London, UK
Y
Yipei Wang
Department of Medical Physics and Biomedical Engineering, University College London, London, UK; UCL Hawkes Institute, University College London, London, UK
V
Veeru Kasivisvanathan
Division of Surgery and Interventional Science, University College London, London, UK
Mirabela Rusu
Mirabela Rusu
Assistant Professor of Radiology at Stanford University
multi-protocolmulti-scale data fusionMRIHistologycomputational imaging
Y
Yipeng Hu
Department of Medical Physics and Biomedical Engineering, University College London, London, UK; UCL Hawkes Institute, University College London, London, UK
Shaheer U. Saeed
Shaheer U. Saeed
University College London
Machine LearningMedical Image ComputingReinforcement Learning