Uncertainty-Aware Adapter: Adapting Segment Anything Model (SAM) for Ambiguous Medical Image Segmentation

📅 2024-03-16
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
Medical images often exhibit ambiguous tissue boundaries, and expert annotations inherently suffer from inter- and intra-observer ambiguity and uncertainty. To address this, we propose an uncertainty-aware refinement framework for Segment Anything Model (SAM). Departing from conventional single-expert annotation paradigms, our method introduces a novel Uncertainty-Aware Adapter module that integrates a Conditional Variational Autoencoder (CVAE) to explicitly model the intrinsic variability in manual segmentations. A conditional interaction mechanism further steers SAM to generate multiple plausible segmentation hypotheses. This design enables quantifiable uncertainty estimation and diverse hypothesis sampling directly from the segmentation output. Evaluated on LIDC-IDRI and REFUGE2 benchmarks, our approach achieves new state-of-the-art performance, significantly enhancing segmentation robustness and producing clinically interpretable, uncertainty-aware segmentation maps with improved anatomical plausibility.

📝 Abstract
The Segment Anything Model (SAM) has achieved significant success in natural image segmentation, and many methods have tried to fine-tune it for medical image segmentation. An efficient way to do so is with Adapters, specialized modules that learn just a few parameters to tailor SAM to medical images. However, unlike natural images, many tissues and lesions in medical images have blurry boundaries and may be ambiguous. Previous efforts to adapt SAM ignore this challenge and can only predict a single, distinct segmentation, which may mislead clinicians or cause misdiagnosis, especially for rare variants or situations with low model confidence. In this work, we propose a novel module called the Uncertainty-aware Adapter, which efficiently fine-tunes SAM for uncertainty-aware medical image segmentation. Using a conditional variational autoencoder, we encode stochastic samples that effectively represent the inherent uncertainty in medical imaging. We also design a new module on top of a standard adapter that uses a condition-based strategy to interact with these samples, helping SAM integrate uncertainty. We evaluate our method on two multi-annotator datasets with different modalities: LIDC-IDRI (lung abnormality segmentation) and REFUGE2 (optic-cup segmentation). The experimental results show that the proposed model outperforms all previous methods and achieves a new state of the art (SOTA) on both benchmarks. We also demonstrate that our method can generate diverse segmentation hypotheses that are both more realistic and more heterogeneous.
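The core mechanism the abstract describes can be illustrated with a minimal NumPy sketch (all names, shapes, and weights here are hypothetical, not the paper's actual implementation): a latent sample is drawn from a CVAE-style Gaussian via the reparameterization trick, then injected into a bottleneck adapter so that different samples yield different plausible segmentation features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def adapter_forward(x, z, W_down, W_cond, W_up):
    # Bottleneck adapter with a residual connection, conditioned on the
    # CVAE sample z: the latent shifts the bottleneck activations,
    # steering SAM's features toward one segmentation hypothesis.
    h = np.maximum(x @ W_down + z @ W_cond, 0.0)  # ReLU
    return x + h @ W_up

# Toy dimensions (hypothetical): 4 tokens, embed dim 8, bottleneck 3, latent 2
d, b, l = 8, 3, 2
x = rng.standard_normal((4, d))
W_down = rng.standard_normal((d, b)) * 0.1
W_cond = rng.standard_normal((l, b)) * 0.1
W_up = rng.standard_normal((b, d)) * 0.1

# Two latent samples from the same (standard normal) prior produce
# two different plausible feature outputs -- the source of diversity.
z1 = sample_latent(np.zeros(l), np.zeros(l), rng)
z2 = sample_latent(np.zeros(l), np.zeros(l), rng)
y1 = adapter_forward(x, z1, W_down, W_cond, W_up)
y2 = adapter_forward(x, z2, W_down, W_cond, W_up)
```

In the paper's setting this adapter sits inside SAM's transformer blocks and only the small adapter weights are trained; sampling many `z` values and decoding each one gives the diverse segmentation hypotheses used for uncertainty estimation.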
Problem

Research questions and friction points this paper is trying to address.

Existing SAM adaptation methods ignore expert annotation uncertainty and variability
Current approaches contradict clinical practice of collective multi-expert interpretations
Single-expert paradigm fails to capture diverse valid medical image segmentations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-aware SAM adaptation for medical images
Stochastic uncertainty sampling from Conditional Variational Autoencoder
Position-conditioned control mechanism for multi-expert knowledge
Mingzhou Jiang
Department of Computer Science, The University of Alabama at Birmingham, Birmingham, AL 35294, United States
Jiaying Zhou
Department of Computer Science, The University of Alabama at Birmingham, Birmingham, AL 35294, United States
Junde Wu
University of Oxford
Artificial Intelligence, AI for Medical Science
Tianyang Wang
University of Alabama at Birmingham
machine learning (deep learning), computer vision
Yueming Jin
Assistant Professor, National University of Singapore
Medical Image Analysis, Surgical AI & Robotics, Multimodal Learning
Min Xu
Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States, and also with Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi 999041, United Arab Emirates