Benchmarking Direct Preference Optimization for Medical Large Vision-Language Models

📅 2026-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical challenges of alignment and reliability faced by medical Large Vision-Language Models (LVLMs) in high-stakes clinical settings, where the efficacy of existing Direct Preference Optimization (DPO) methods remains underexplored. The authors present the first comprehensive evaluation of nine DPO variants on LLaVA-Med and HuatuoGPT-Vision, uncovering significant performance inconsistencies across tasks and architectures, as well as notable deficiencies in visual understanding. To mitigate visual misinterpretation, they propose an explicitly tailored preference construction strategy that improves performance by 3.6% over the strongest DPO baseline on medical visual question answering. All training data, model checkpoints, and the code framework are released to support reproducibility and further research.

📝 Abstract
Large Vision-Language Models (LVLMs) hold significant promise for medical applications, yet their deployment is often constrained by insufficient alignment and reliability. While Direct Preference Optimization (DPO) has emerged as a potent framework for refining model responses, its efficacy in high-stakes medical contexts remains underexplored, lacking the rigorous empirical groundwork necessary to guide future methodological advances. To bridge this gap, we present the first comprehensive examination of diverse DPO variants within the medical domain, evaluating nine distinct formulations across two medical LVLMs: LLaVA-Med and HuatuoGPT-Vision. Our results reveal several critical limitations: current DPO approaches often yield inconsistent gains over supervised fine-tuning, with their efficacy varying significantly across different tasks and backbones. Furthermore, they frequently fail to resolve fundamental visual misinterpretation errors. Building on these insights, we present a targeted preference construction strategy as a proof-of-concept that explicitly addresses visual misinterpretation errors frequently observed in existing DPO models. This design yields a 3.6% improvement over the strongest existing DPO baseline on visual question-answering tasks. To support future research, we release our complete framework, including all training data, model checkpoints, and our codebase at https://github.com/dmis-lab/med-vlm-dpo.
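The abstract benchmarks variants of the DPO objective, which trains a policy directly on preference pairs without a separate reward model. As background, here is a minimal sketch of the standard per-pair DPO loss (Rafailov et al., 2023): the function name and its scalar log-probability inputs are illustrative assumptions, not the paper's implementation.

```python
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for a single preference pair.

    pi_logp_w / pi_logp_l:  policy log-probs of the chosen / rejected response
    ref_logp_w / ref_logp_l: frozen reference-model log-probs of the same responses
    beta: strength of the implicit KL constraint toward the reference model
    """
    # Implicit reward margin between chosen and rejected responses
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    # Numerically stable -log(sigmoid(margin))
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

When the policy matches the reference (zero margin), the loss is log 2; as the policy assigns relatively higher likelihood to the chosen response, the loss decreases toward zero. The medical-specific variants evaluated in the paper modify how the preference pairs themselves are constructed, not this basic objective.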
Problem

Research questions and friction points this paper is trying to address.

Direct Preference Optimization
Medical Large Vision-Language Models
Visual Misinterpretation
Model Alignment
Preference Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Preference Optimization
Medical Vision-Language Models
Preference Construction
Visual Misinterpretation
Benchmarking