Dynamic Gradient Sparsification Training for Few-Shot Fine-tuning of CT Lymph Node Segmentation Foundation Model

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of lymph node (LN) segmentation annotations in head-and-neck CT imaging, this work introduces the first large-scale LN annotation dataset comprising 36,106 annotated LNs. We propose Dynamic Gradient Sparsification Training (DGST), a few-shot fine-tuning method that adaptively identifies and updates the most sensitive parameter subset via hierarchical freezing and gradient importance estimation—thereby preserving foundational model knowledge while enhancing task specificity. Implemented within the nnUNetv2 framework, DGST achieves state-of-the-art performance on SegRap2023 and LNQ2023 benchmarks: with only 1–5 annotated cases per domain, it surpasses leading few-shot segmentation methods by over 8.2 percentage points in Dice score, demonstrating strong generalizability. To foster reproducibility and advance foundation models for medical image segmentation, we publicly release the dataset, trained models, and source code.

📝 Abstract
Accurate lymph node (LN) segmentation is critical in radiotherapy treatment and prognosis analysis, but is limited by the need for large annotated datasets. While deep learning-based segmentation foundation models show potential in developing high-performing models with fewer samples, their medical adaptation faces LN domain-specific prior deficiencies and inefficient few-shot fine-tuning for complex clinical practices, highlighting the necessity of an LN segmentation foundation model. In this work, we annotated 36,106 visible LNs from 3,346 publicly available head-and-neck CT scans to establish a robust LN segmentation model (nnUNetv2). Building on this, we propose Dynamic Gradient Sparsification Training (DGST), a few-shot fine-tuning approach that preserves foundational knowledge while dynamically updating the most critical parameters of the LN segmentation model with few annotations. We validate it on two publicly available LN segmentation datasets: SegRap2023 and LNQ2023. The results show that DGST outperforms existing few-shot fine-tuning methods, achieving satisfactory performance with limited labeled data. We release the dataset, models and all implementations to facilitate relevant research: https://github.com/Zihaoluoh/LN-Seg-FM.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited annotated datasets for lymph node segmentation.
Improves few-shot fine-tuning for medical adaptation of deep learning models.
Enhances performance of lymph node segmentation with minimal labeled data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Gradient Sparsification Training (DGST) introduced
Few-shot fine-tuning for LN segmentation model
Preserves foundational knowledge with minimal annotations
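The core idea above — selecting and updating only the most gradient-sensitive parameters during few-shot fine-tuning — can be sketched as follows. This is a minimal illustrative PyTorch version, not the authors' released implementation; the global top-k selection rule, the `sparsity` ratio, and per-element (rather than per-layer) granularity are assumptions.

```python
import torch
import torch.nn as nn


def dgst_step(model, loss_fn, batch, optimizer, sparsity=0.01):
    """One fine-tuning step that updates only the most gradient-sensitive
    parameter entries (a sketch of dynamic gradient sparsification;
    the paper's exact selection rule and schedule may differ)."""
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Pool all gradient magnitudes to find a global importance threshold.
    all_mags = torch.cat([p.grad.abs().flatten()
                          for p in model.parameters() if p.grad is not None])
    k = max(1, int(sparsity * all_mags.numel()))
    threshold = torch.topk(all_mags, k).values.min()
    # Zero gradients below the threshold so only the top-k entries update,
    # leaving the remaining foundation-model weights untouched this step.
    for p in model.parameters():
        if p.grad is not None:
            p.grad.mul_((p.grad.abs() >= threshold).float())
    optimizer.step()
    return loss.item()
```

Because the threshold is recomputed every step from the current gradients, the updated subset changes dynamically over training, which is what distinguishes this style of fine-tuning from statically freezing fixed layers.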
Zihao Luo
University of Electronic Science and Technology of China | Shanghai Innovation Institute
Medical Image Analysis · Foundation Model · AI for Science
Zijun Gao
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong.
Wenjun Liao
Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu
Shichuan Zhang
Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu
Guotai Wang
Professor, University of Electronic Science and Technology of China
medical image analysis · computer vision · deep learning
Xiangde Luo
Stanford University
medical image analysis · computer vision · computational pathology