🤖 AI Summary
To address performance bottlenecks in lightweight models for medical image segmentation—stemming from limited model capacity and suboptimal training strategies—this paper proposes a multi-source large-model knowledge distillation framework tailored for videofluoroscopic swallowing study (VFSS) segmentation. Methodologically, it introduces the first collaborative distillation mechanism integrating three cross-task medical foundation models: MedSAM, RAD-DINO, and MedCLIP. The framework jointly aligns features and outputs across teacher models and incorporates a lightweight student encoder. Crucially, it breaks the conventional paradigm requiring task-specific model training: a single student model generalizes across 12 diverse segmentation tasks. Experiments on the VFSS dataset demonstrate a 2.0% average Dice coefficient improvement over single-teacher distillation baselines, with significant gains in cross-task transferability and robustness.
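The summary above describes two distillation signals: aligning the student's intermediate features with each teacher's features, and matching the teachers' soft outputs. The paper does not give the loss formulation here, so the following is only a minimal illustrative sketch of a multi-teacher distillation objective; the per-teacher projection matrices, feature dimensions, and loss weights are all hypothetical placeholders, not the authors' actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_alignment_loss(student_feat, teacher_feats, projections):
    """Average MSE between projected student features and each teacher's features.

    A learned linear projection per teacher maps the student's feature
    dimension onto that teacher's dimension before comparison.
    """
    loss = 0.0
    for name, t_feat in teacher_feats.items():
        projected = student_feat @ projections[name]  # (batch, d_student) -> (batch, d_teacher)
        loss += np.mean((projected - t_feat) ** 2)
    return loss / len(teacher_feats)

def output_distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    return float(np.mean(np.sum(p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8)), axis=-1)))

# Toy dimensions (hypothetical): a small student encoder distilled from
# three teachers with different embedding sizes.
rng = np.random.default_rng(0)
d_student = 64
teacher_dims = {"MedSAM": 256, "RAD-DINO": 768, "MedCLIP": 512}

student_feat = rng.standard_normal((16, d_student))
teacher_feats = {k: rng.standard_normal((16, d)) for k, d in teacher_dims.items()}
projections = {k: rng.standard_normal((d_student, d)) * 0.01 for k, d in teacher_dims.items()}

l_feat = feature_alignment_loss(student_feat, teacher_feats, projections)
l_out = output_distillation_loss(rng.standard_normal((16, 2)), rng.standard_normal((16, 2)))
total = l_feat + 0.5 * l_out  # weighting between terms is illustrative only
```

In practice the projections would be trained jointly with the student, and a task loss (e.g. Dice on the segmentation masks) would be added to the weighted sum; this sketch only shows how signals from several teachers can be combined into one objective.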
📝 Abstract
The deployment of foundation models for medical imaging has demonstrated considerable success. However, training overheads on downstream tasks remain substantial due to the size of the image encoders employed, and inference is correspondingly costly. Although lightweight variants of these foundation models exist, their performance is constrained by limited model capacity and suboptimal training strategies. To achieve a better tradeoff between complexity and performance, we propose a new framework that improves low-complexity models via knowledge distillation from multiple large medical foundation models (e.g., MedSAM, RAD-DINO, MedCLIP), each specializing in a different vision task, with the goal of bridging the performance gap on medical image segmentation. The agglomerated model demonstrates superior generalization across 12 segmentation tasks, whereas specialized models require explicit training for each task. Our approach achieved an average performance gain of 2% in Dice coefficient compared to simple distillation.