Lightweight Joint Optimization of General-Purpose Vision-Language Models and Retrievers for Medical Diagnosis

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the misalignment between retrievers and large vision-language models (LVLMs) in retrieval-augmented generation (RAG) for medical image diagnosis, this work proposes the first end-to-end joint fine-tuning framework that co-optimizes a multimodal retriever and a general-purpose LVLM. By enabling gradient backpropagation from the LVLM to the retriever, the framework allows the retriever to learn from LVLM prediction errors, eliminating the need for domain-specific pretraining while matching the performance of medical-specialized models. The authors empirically identify and quantify a critical challenge, "retrieval-image divergence," in which inconsistent retrieved images lead to unstable predictions and become a key source of diagnostic failure; joint optimization significantly improves robustness on such hard cases. Evaluated on clinical multi-label classification and visual question answering tasks, the method achieves performance on par with medical-specialized models and substantially outperforms conventional RAG, especially on challenging examples, though closing the remaining gap to oracle performance is left as an open direction.
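The core mechanism described above can be illustrated with a minimal numerical sketch. This is not the paper's actual objective or code; it is a hedged toy example of one common way to make retrieval differentiable: weight the LVLM's per-retrieved-image loss by a softmax retrieval distribution, so the expected loss depends on retriever scores and gradients flow back to the retriever. All numbers and names (`sim`, `lvlm_loss`) are hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over retriever scores
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical retriever similarities for the top-k retrieved images
sim = np.array([2.0, 1.0, 0.5])
# Hypothetical LVLM prediction loss when conditioned on each image
lvlm_loss = np.array([0.2, 1.5, 0.9])

p = softmax(sim)                   # retrieval distribution over top-k images
joint_loss = float(p @ lvlm_loss)  # expected LVLM loss under that distribution

# Because joint_loss depends on `sim`, the retriever receives a gradient
# from LVLM errors. Finite-difference check on the first score:
eps = 1e-5
sim_bumped = sim.copy()
sim_bumped[0] += eps
grad0 = (float(softmax(sim_bumped) @ lvlm_loss) - joint_loss) / eps
# grad0 < 0: raising the score of the lowest-loss image lowers expected loss,
# i.e. the retriever is pushed toward images the LVLM diagnoses correctly.
```

In an actual implementation this expectation would be computed with an autodiff framework so that one backward pass updates both retriever and LVLM parameters; the finite difference here only demonstrates that the gradient signal exists.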

📝 Abstract
Clinical decision-making often involves interpreting images (e.g., radiology) for making diagnoses. Retrieving relevant visual information from medical literature and hospital records could enhance diagnostic accuracy. In this paper, we develop a model in which a multimodal retriever is jointly optimized with an LVLM for medical diagnosis, unlike standard RAG where the LVLM's error signal is not propagated down to the retriever. We show that using only general-purpose backbones, with only lightweight fine-tuning, our model is able to achieve competitive results with medically-pretrained models across clinical multi-label classification and visual question answering tasks. In a novel analysis, we additionally find that in many cases different top retrieved images each lead to different predictions for a given target, and that these cases are empirically challenging for all models, even for non-retrieval models. Our joint retrieval optimization significantly improves these challenging cases over standard RAG. However, oracle analysis reveals that while the correct diagnosis is frequently achievable using one of the top retrieved images, in practice there is a large performance gap from the oracle, and rerankers using frontier LVLMs do not close this gap -- leaving ample room for improvement by future methods. Code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Retrievers and LVLMs are misaligned in standard RAG for medical image diagnosis
The LVLM's error signal is not propagated back to the retriever in standard RAG
Different top retrieved images can yield conflicting predictions for the same target, making such cases hard for all models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly optimizes a multimodal retriever with an LVLM end-to-end
Uses only lightweight fine-tuning of general-purpose backbones, without medical pretraining
Significantly improves challenging retrieval-divergence cases over standard RAG
Nir Mazor
School of Computer Science and Engineering, The Hebrew University of Jerusalem
Tom Hope
Independent Researcher