AI Summary
Medical vision-language models (VLMs) suffer from poor generalization and high clinical deployment risk due to spurious correlations arising from variability in imaging protocols and radiology report texts. To address this, we propose DRiFt, a structured feature disentanglement framework that explicitly separates clinically relevant vision-language signals from protocol-related nuisance variations, a first in medical VLMs. DRiFt combines LoRA-based parameter-efficient fine-tuning with learnable prompt tokens, and curates high-fidelity, cross-modally aligned image-text pairs, thereby substantially reducing model uncertainty. On in-distribution evaluation, DRiFt achieves +11.4% Top-1 accuracy and +3.3% Macro-F1 over baselines. Crucially, it demonstrates markedly improved robustness across multiple unseen domain datasets, effectively mitigating performance degradation caused by domain shift. This work establishes a novel paradigm for reliable few-shot adaptation of medical VLMs.
Abstract
Medical vision-language models (VLMs) offer promise for clinical decision support, yet their reliability under distribution shifts remains a major concern for safe deployment. These models often learn task-agnostic correlations due to variability in imaging protocols and free-text reports, limiting their generalizability and increasing the risk of failure in real-world settings. We propose DRiFt, a structured feature decoupling framework that explicitly separates clinically relevant signals from task-agnostic noise using parameter-efficient tuning (LoRA) and learnable prompt tokens. To enhance cross-modal alignment and reduce uncertainty, we curate high-quality, clinically grounded image-text pairs by generating captions for a diverse medical dataset. Our approach improves in-distribution performance by +11.4% Top-1 accuracy and +3.3% Macro-F1 over prior prompt-based methods, while maintaining strong robustness across unseen datasets. Ablation studies reveal that disentangling task-relevant features and careful alignment significantly enhance model generalization and reduce unpredictable behavior under domain shift. These insights contribute toward building safer, more trustworthy VLMs for clinical use. The code is available at https://github.com/rumaima/DRiFt.
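The two parameter-efficient components the abstract names, LoRA adapters and learnable prompt tokens, can be sketched as follows. This is a minimal illustrative sketch only: the dimensions, variable names, and the numpy stand-in for a real text encoder are assumptions, not the released DRiFt implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- LoRA: freeze the pretrained weight W, train a low-rank delta B @ A ---
d, r, alpha = 64, 4, 8                   # hidden dim, LoRA rank, scaling (assumed values)
W = rng.standard_normal((d, d))          # frozen pretrained projection weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init so the
                                         # adapted model starts identical to the base

def lora_forward(x):
    """y = x W^T + (alpha / r) * x A^T B^T; only A and B receive gradients."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# --- Learnable prompt tokens: prepend trainable context vectors to the ---
# --- class-name token embeddings before they enter the text encoder    ---
n_ctx = 4
prompt_tokens = rng.standard_normal((n_ctx, d)) * 0.02  # trainable context
class_embed = rng.standard_normal((3, d))               # e.g. embeddings of a class name

text_input = np.concatenate([prompt_tokens, class_embed], axis=0)

x = rng.standard_normal((2, d))
y = lora_forward(x)
print(y.shape, text_input.shape)  # (2, 64) (7, 64)
```

Because `B` is zero-initialized, the LoRA path contributes nothing at the start of fine-tuning, so adaptation departs smoothly from the pretrained model; only `A`, `B`, and `prompt_tokens` would be updated, keeping the tuned parameter count small.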