🤖 AI Summary
Medical vision-language models (VLMs) suffer from clinical hallucinations in chest X-ray analysis, undermining their reliability. To address this, we propose a low-resource preference optimization framework. First, we construct fine-grained, multi-task instruction data. Second, we introduce a confidence-similarity joint hard-example mining strategy to improve sample efficiency and distribution balance. Crucially, we pioneer the use of counterfactual reasoning to automatically generate clinically aware, fine-grained preference labels—eliminating the need for expert annotation. Our technical pipeline integrates supervised fine-tuning, token-level confidence modeling, retrieval augmentation, and counterfactual rationale generation. Experiments demonstrate an 8.93% relative performance gain using only 5% of supervised data, achieving state-of-the-art results across multiple chest X-ray understanding tasks. The approach significantly reduces annotation cost while enhancing model interpretability and clinical alignment.
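The confidence-based hard-example mining step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold, the use of mean token probability as the confidence score, and all names are hypothetical assumptions. The idea is to score each SFT output by the per-token probabilities the fine-tuned model assigned to its own answer, and flag low-confidence samples as hard examples.

```python
import math

def token_confidences(token_logprobs):
    """Convert per-token log-probabilities to probabilities."""
    return [math.exp(lp) for lp in token_logprobs]

def mine_hard_examples(samples, threshold=0.5):
    """Flag SFT outputs whose mean token confidence falls below a threshold.

    `samples` maps a sample id to the per-token log-probabilities the model
    assigned to its generated answer. Low mean confidence is treated as a
    hard example. The threshold and the mean-based score are illustrative,
    not the paper's exact criterion.
    """
    hard = []
    for sid, logprobs in samples.items():
        confs = token_confidences(logprobs)
        mean_conf = sum(confs) / len(confs)
        if mean_conf < threshold:
            hard.append((sid, mean_conf))
    # Hardest (least confident) samples first.
    return sorted(hard, key=lambda x: x[1])
```

In practice the log-probabilities would come from the fine-tuned VLM's decoding scores; here they are passed in directly so the sketch stays self-contained.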
📝 Abstract
Vision-language models (VLMs) are prone to hallucinations that critically compromise reliability in medical applications. While preference optimization can mitigate these hallucinations through clinical feedback, its implementation faces challenges such as clinically irrelevant training samples, imbalanced data distributions, and prohibitive expert annotation costs. To address these challenges, we introduce CheXPO, a Chest X-ray Preference Optimization strategy that combines confidence-similarity joint mining with counterfactual rationales. Our approach begins by synthesizing a unified, fine-grained multi-task chest X-ray visual instruction dataset across different question types for supervised fine-tuning (SFT). We then identify hard examples through token-level confidence analysis of SFT failures and use similarity-based retrieval to expand the hard-example set and balance the preference sample distribution, while synthetic counterfactual rationales provide fine-grained clinical preferences, eliminating the need for additional expert input. Experiments show that CheXPO achieves an 8.93% relative performance gain using only 5% of SFT samples, reaching state-of-the-art performance across diverse clinical tasks and providing a scalable, interpretable solution for real-world radiology applications.
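The similarity-based retrieval expansion can be sketched as below. This is a hedged illustration under assumed inputs: embeddings are hypothetical dense vectors (e.g. from a report or image encoder), and `k` and the cosine-similarity ranking are stand-ins for whatever retrieval scheme the pipeline actually uses. For each mined hard example, the nearest pool samples are retrieved so the preference set is expanded around each failure mode rather than concentrated on a few cases.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand_hard_examples(hard_embs, pool_embs, k=2):
    """Retrieve the k pool samples most similar to each hard example.

    `hard_embs` / `pool_embs` map sample ids to embedding vectors; both the
    encoder producing them and k are illustrative assumptions. The retrieved
    neighbors enlarge the hard-example set to balance the preference data.
    """
    expanded = {}
    for hid, he in hard_embs.items():
        scored = sorted(pool_embs.items(),
                        key=lambda kv: cosine(he, kv[1]),
                        reverse=True)
        expanded[hid] = [pid for pid, _ in scored[:k]]
    return expanded
```

Retrieved neighbors would then be paired with counterfactual rationales to form preference pairs; that labeling step is omitted here.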