AI Summary
This work addresses the challenge that conventional fine-tuning often disrupts the multimodal priors of pretrained vision-language models, impairing their generalization in surgical settings. To mitigate this, the authors propose a Chain-of-Adaptation (CoA) framework, which introduces reinforcement learning into domain adaptation for surgical vision-language models for the first time. CoA employs a structured reasoning mechanism to guide knowledge injection, aligning the model with the surgical domain while preserving its general-purpose vision-language understanding. Experimental results demonstrate that CoA significantly outperforms supervised fine-tuning on both in-distribution and out-of-distribution surgical data, achieving higher accuracy and more stable behavior while effectively retaining core multimodal reasoning abilities.
Abstract
Conventional fine-tuning on domain-specific datasets can inadvertently alter a model's pretrained multimodal priors, leading to reduced generalization. To address this, we propose Chain-of-Adaptation (CoA), an adaptation framework designed to integrate domain knowledge while maintaining the model's inherent reasoning and perceptual capabilities. CoA introduces a structured reasoning format, optimized via reinforcement learning, that enhances domain alignment without sacrificing general multimodal competence. Experiments on standard surgical benchmarks, under both in-distribution and out-of-distribution settings, demonstrate that CoA achieves higher accuracy, stronger generalization, and more stable behavior than supervised fine-tuning. Furthermore, ablation studies confirm that CoA effectively preserves the model's core vision-language abilities, providing a reliable pathway for domain specialization in VLMs.