🤖 AI Summary
Current medical AI models for chest X-ray (CXR) interpretation rely on end-to-end black-box paradigms that neglect the multi-stage clinical reasoning employed by radiologists, resulting in poor interpretability, limited error traceability, and weak clinical adaptability. Method: We introduce CXRTrek, the first vision-language dataset to explicitly model an 8-stage progressive diagnostic workflow (428K images, over 11M Q&A pairs), and propose CXRTrekNet, a dedicated vision-language large model that jointly models task-specific requirements, contextual cues, and historical reasoning dependencies. The methodology comprises multi-stage VQA construction, formalization of the clinical reasoning process, diagnosis-path-driven architecture design, and cross-dataset generalization training. Contribution/Results: CXRTrekNet achieves state-of-the-art performance across all tasks on the CXRTrek benchmark, outperforming existing medical vision-language large models, and attains the strongest generalization across multiple diagnostic tasks on five external datasets.
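For concreteness, a single multi-stage sample could be organized roughly as sketched below. The field names (`image_path`, `stages`, `qa_pairs`, etc.) are illustrative assumptions, not the dataset's documented schema; see the repository for the actual format.

```python
# Hypothetical sketch of one multi-stage CXRTrek-style sample.
# All field names here are assumptions for illustration only.
sample = {
    "image_path": "cxr_00001.png",
    "stages": [
        {
            "stage": 1,  # position in the 8-stage diagnostic workflow
            "qa_pairs": [
                {
                    "question": "Is this a frontal chest X-ray of adequate quality?",
                    "answer": "Yes, a PA view with adequate inspiration and no rotation.",
                },
            ],
        },
        # ... stages 2-8 follow, each building on the answers of earlier stages
    ],
}
```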
📝 Abstract
Artificial intelligence (AI)-based chest X-ray (CXR) interpretation assistants have demonstrated significant progress and are increasingly being applied in clinical settings. However, contemporary medical AI models often adhere to a simplistic input-to-output paradigm, directly processing an image and an instruction to generate a result, where the instruction may even be baked into the model's architecture. This approach overlooks the inherent diagnostic reasoning of chest X-ray interpretation, which is typically sequential: each interpretive stage considers the images, the current task, and the contextual information from previous stages. The oversight leads to several shortcomings, including misalignment with clinical scenarios, contextless reasoning, and untraceable errors. To fill this gap, we construct CXRTrek, a new multi-stage visual question answering (VQA) dataset for CXR interpretation that, for the first time, explicitly simulates the diagnostic reasoning process radiologists follow in real-world clinical settings. CXRTrek covers 8 sequential diagnostic stages, comprising 428,966 samples and over 11 million question-answer (Q&A) pairs, with an average of 26.29 Q&A pairs per sample. Building on the CXRTrek dataset, we propose a new vision-language large model (VLLM), CXRTrekNet, specifically designed to incorporate the clinical reasoning flow into the VLLM framework. CXRTrekNet effectively models the dependencies between diagnostic stages and captures reasoning patterns within the radiological context. Trained on our dataset, the model consistently outperforms existing medical VLLMs on the CXRTrek benchmark and demonstrates superior generalization across multiple tasks on five diverse external datasets. The dataset and model are available in our repository (https://github.com/guanjinquan/CXRTrek).
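As a rough illustration of the stage-wise reasoning flow described above, and not of CXRTrekNet's actual architecture, a sequential VQA loop can accumulate earlier answers as context for later stages. `model.generate` below is a placeholder for whatever VLLM interface is used; this is a minimal sketch under that assumption.

```python
def staged_inference(model, image, stage_questions):
    """Minimal sketch of stage-wise VQA: each stage sees the image, its own
    questions, and the Q&A history from earlier stages. `model.generate` is a
    hypothetical placeholder, not an actual CXRTrekNet API."""
    history = []  # accumulated (question, answer) pairs from prior stages
    for questions in stage_questions:  # one list of questions per diagnostic stage
        for question in questions:
            context = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
            prompt = f"{context}\nQ: {question}\nA:" if context else f"Q: {question}\nA:"
            answer = model.generate(image=image, prompt=prompt)
            history.append((question, answer))
    return history
```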