Parameter Efficient Fine-Tuning of Segment Anything Model

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and heavy reliance on dense annotations when fine-tuning the Segment Anything Model (SAM) for biomedical image segmentation, this paper systematically evaluates nine parameter-efficient fine-tuning (PEFT) methods on SAM and introduces, for the first time, a QLoRA implementation tailored to Vision Transformers (ViTs), alongside a novel lightweight fine-tuning framework. Experiments across multiple heterogeneous biomedical datasets show that the approach reduces GPU memory consumption and training overhead by up to 70% while matching the segmentation accuracy of full-parameter fine-tuning. Key contributions: (1) the first comprehensive PEFT benchmark for SAM in biomedical imaging; (2) an open-source, ViT-adapted QLoRA implementation; and (3) a lightweight fine-tuning paradigm that jointly optimizes efficiency and accuracy, establishing a reproducible, scalable path to medical image segmentation in resource-constrained settings.
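The memory saving in QLoRA comes from keeping the frozen pretrained weights in a compressed 4-bit representation and training only small adapters in higher precision. A minimal sketch of the underlying block-wise absmax quantization idea (this is illustrative, not the paper's actual NF4 implementation; function names and the block size are assumptions):

```python
import numpy as np

def quantize_absmax_4bit(w, block=64):
    """Compress a weight matrix to signed 4-bit integers plus one scale per block."""
    flat = w.ravel()
    pad = (-flat.size) % block            # pad so the length divides evenly into blocks
    flat = np.concatenate([flat, np.zeros(pad)])
    blocks = flat.reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0             # avoid division by zero in all-zero blocks
    q = np.round(blocks / scales * 7).astype(np.int8)  # map each block to [-7, 7]
    return q, scales, w.shape, pad

def dequantize(q, scales, shape, pad):
    """Recover an approximate float matrix from the 4-bit codes and block scales."""
    flat = (q.astype(np.float64) / 7.0) * scales
    flat = flat.ravel()
    if pad:
        flat = flat[:-pad]
    return flat.reshape(shape)

W = np.arange(100, dtype=float).reshape(10, 10)
W_hat = dequantize(*quantize_absmax_4bit(W))
print(np.abs(W - W_hat).max())  # small per-block rounding error, not exact
```

Because the base weights stay quantized and frozen, gradients and optimizer state exist only for the low-rank adapter parameters, which is what drives the reported reduction in GPU memory.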

📝 Abstract
Segmentation is an important analysis task for biomedical images, enabling the study of individual organelles, cells, or organs. Deep learning has massively improved segmentation methods, but challenges remain in generalization to new conditions, requiring costly data annotation. Vision foundation models, such as the Segment Anything Model (SAM), address this issue through broad segmentation capabilities. However, these models still require finetuning on annotated data, albeit with fewer annotations, to achieve optimal results for new conditions. As a downside, they require more computational resources. This makes parameter-efficient finetuning (PEFT) relevant for their application. We contribute the first comprehensive study of PEFT for SAM applied to biomedical segmentation by evaluating 9 PEFT methods on diverse datasets. We also provide an implementation of QLoRA for vision transformers and a new approach for resource-efficient finetuning of SAM. Our code is publicly available at https://github.com/computational-cell-analytics/peft-sam.
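Most of the PEFT methods the abstract refers to share the LoRA pattern: the pretrained weight is frozen and only a low-rank residual update is trained. A minimal NumPy sketch of that pattern (class and parameter names are illustrative assumptions, not the peft-sam API):

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer W plus a trainable low-rank update (alpha / r) * B @ A."""

    def __init__(self, weight, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = weight                                  # frozen pretrained weight, (out, in)
        out_dim, in_dim = weight.shape
        # A starts small and random, B starts at zero, so the update is a no-op initially
        self.A = rng.normal(0.0, 0.01, size=(rank, in_dim))
        self.B = np.zeros((out_dim, rank))
        self.scale = alpha / rank

    def __call__(self, x):
        # x: (batch, in); during fine-tuning only A and B would receive gradients
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

W = np.arange(12, dtype=float).reshape(3, 4)
layer = LoRALinear(W, rank=2)
x = np.ones((2, 4))
print(np.allclose(layer(x), x @ W.T))  # True: with B = 0 the adapted layer matches the frozen one
```

For a layer with `out * in` frozen weights, the adapter adds only `rank * (out + in)` trainable parameters, which is the source of the parameter efficiency.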
Problem

Research questions and friction points this paper is trying to address.

Biomedical Image Segmentation
Segment Anything Model (SAM)
Annotated Data Reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

PEFT Methods
QLoRA
Resource-Efficient Fine-Tuning