AI Summary
Current vision-language models (VLMs) for chest X-ray analysis lack interpretability, hindering clinical audit and human-AI collaboration. To address this, we propose the first reasoning-first VLM framework tailored to thoracic radiology diagnosis. It explicitly models radiologists' systematic differential diagnostic process via chain-of-thought prompting, generating traceable, uncertainty-aware reasoning paths with alternative hypotheses. Methodologically, the framework integrates high-fidelity visual encoding, two-stage supervised fine-tuning, and verifiability-guided reinforcement learning to jointly model multiple abnormalities. Experiments demonstrate competitive performance on multi-label classification tasks. Radiologist evaluations confirm that the generated reasoning trajectories significantly enhance diagnostic confidence, accelerate report generation, and enable error tracing and decision audit. This work establishes a novel paradigm for trustworthy AI-assisted diagnosis in medical imaging.
Abstract
Vision-language models (VLMs) have shown strong promise for medical image analysis, but most remain opaque, offering predictions without the transparent, stepwise reasoning clinicians rely on. We present a framework that brings chain-of-thought (CoT) reasoning to chest X-ray interpretation. Inspired by reasoning-first training paradigms, our approach is designed to learn how experts reason, not just what they conclude, by aligning intermediate steps with observable image evidence and radiology workflow. Beyond accuracy, the explicit reasoning traces support clinical auditability: they reveal why a conclusion was reached, which alternatives were considered, and where uncertainty remains, enabling quality assurance, error analysis, and safer human-AI collaboration.
Our model couples high-fidelity visual encoding with a two-stage training recipe: reasoning-style supervised fine-tuning (SFT) followed by reinforcement learning (RL) that uses verifiable rewards over a list of X-ray abnormalities. The model outputs reasoning that mirrors radiologists' systematic thought process, expressed uncertainty, and differential diagnosis. In out-of-distribution evaluation, the approach achieves competitive multi-label classification while improving interpretability. In a reader study with expert radiologists, full reasoning traces increased confidence, supported error auditing, and reduced the time to finalize reports. We release code and the model NV-Reason-CXR-3B to support community progress toward trustworthy, explainable AI in chest radiography and other medical imaging tasks where reasoning quality is as critical as prediction quality.
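To make the RL stage concrete, the sketch below shows one common way to implement a verifiable reward over a multi-label abnormality list: extract the labels the model names in its final answer and score them against the ground-truth label set with set-based F1. The abstract does not specify the exact reward, so the extraction heuristic, the `VOCAB` label set, and the function names here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a verifiable reward for RL over multi-label CXR findings.
# Assumption: the reward compares predicted vs. gold label sets via F1;
# the paper's actual reward formulation may differ.

def extract_findings(model_output: str, label_vocab: set[str]) -> set[str]:
    """Naive extraction: keep any vocabulary label mentioned in the output."""
    text = model_output.lower()
    return {label for label in label_vocab if label.lower() in text}

def verifiable_reward(predicted: set[str], gold: set[str]) -> float:
    """Set-based F1: 1.0 for an exact match, 0.0 for no overlap."""
    if not predicted and not gold:
        return 1.0  # correctly predicted "no abnormality"
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example with a hypothetical label vocabulary.
VOCAB = {"Cardiomegaly", "Pleural Effusion", "Pneumothorax", "Atelectasis"}
answer = "... the most likely findings are cardiomegaly and pleural effusion."
pred = extract_findings(answer, VOCAB)
reward = verifiable_reward(pred, {"Cardiomegaly", "Pleural Effusion"})
```

Because the reward is computed against verifiable labels rather than a learned critic, it gives the RL stage an objective, auditable training signal, which matches the framework's emphasis on traceability.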