Agglomerating Large Vision Encoders via Distillation for VFSS Segmentation

📅 2025-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address performance bottlenecks in lightweight models for medical image segmentation—stemming from limited model capacity and suboptimal training strategies—this paper proposes a multi-source large-model knowledge distillation framework tailored for videofluoroscopic swallowing study (VFSS) segmentation. Methodologically, it introduces the first collaborative distillation mechanism integrating three cross-task medical foundation models: MedSAM, RAD-DINO, and MedCLIP. The framework jointly aligns features and outputs across teacher models and incorporates a lightweight student encoder. Crucially, it breaks the conventional paradigm requiring task-specific model training: a single student model generalizes across 12 diverse segmentation tasks. Experiments on the VFSS dataset demonstrate a 2.0% average Dice coefficient improvement over single-teacher distillation baselines, with significant gains in cross-task transferability and robustness.
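The core idea above — distilling a lightweight student from several medical foundation teachers by aligning features — can be sketched as a combined loss. This is a minimal illustrative sketch, not the authors' implementation: the projection heads, teacher embedding widths, and per-teacher weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean-squared error between two feature vectors."""
    return float(np.mean((a - b) ** 2))

def agglomerated_distill_loss(student_feat, teacher_feats, projections, weights, seg_loss):
    """Total loss = segmentation task loss + weighted feature-alignment terms,
    one per teacher. `projections[name]` is a (hypothetical) linear head that
    maps the student feature into teacher `name`'s embedding space."""
    align = 0.0
    for name, t_feat in teacher_feats.items():
        s_proj = student_feat @ projections[name]   # project student feature to teacher space
        align += weights[name] * mse(s_proj, t_feat)
    return seg_loss + align

# Toy example: a 256-dim student feature distilled toward three teachers
# with different embedding widths (all dimensions here are illustrative).
student = rng.standard_normal(256)
teachers = {"MedSAM": rng.standard_normal(768),
            "RAD-DINO": rng.standard_normal(1024),
            "MedCLIP": rng.standard_normal(512)}
proj = {n: rng.standard_normal((256, t.shape[0])) * 0.01 for n, t in teachers.items()}
w = {"MedSAM": 1.0, "RAD-DINO": 0.5, "MedCLIP": 0.5}

loss = agglomerated_distill_loss(student, teachers, proj, w, seg_loss=0.3)
```

In this sketch the alignment terms are plain MSE; the paper additionally aligns outputs across teachers, which would add logit- or mask-level terms of the same form.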

📝 Abstract
The deployment of foundation models for medical imaging has demonstrated considerable success. However, the training overheads associated with downstream tasks remain substantial, owing to the size of the image encoders employed, and inference complexity is also high. Although lightweight variants of these foundation models exist, their performance is constrained by limited model capacity and suboptimal training strategies. To achieve an improved tradeoff between complexity and performance, we propose a new framework that improves the performance of low-complexity models via knowledge distillation from multiple large medical foundation models (e.g., MedSAM, RAD-DINO, MedCLIP), each specializing in different vision tasks, with the goal of effectively bridging the performance gap for medical image segmentation tasks. The agglomerated model demonstrates superior generalization across 12 segmentation tasks, whereas specialized models require explicit training for each task. Our approach achieved an average performance gain of 2% in Dice coefficient compared to simple distillation.
Problem

Research questions and friction points this paper is trying to address.

Reduce training overhead of large medical vision encoders
Improve performance of lightweight medical foundation models
Bridge performance gap in medical image segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation from multiple large models
Improved performance with low complexity
Superior generalization across segmentation tasks
Chengxi Zeng
University of Bristol, UK
Yuxuan Jiang
University of Bristol, UK
Fan Zhang
University of Bristol, UK
A. Gambaruto
University of Bristol, UK
Tilo Burghardt
University of Bristol
Animal Biometrics · AI for Conservation · Conservation Technology · Computer Vision · Imageomics