MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large vision-language models (LVLMs) and MedVQA datasets largely neglect temporal dynamics and the integration of longitudinal clinical history in medical image diagnosis. This work targets identifying disease progression in specific regions across sequential chest X-ray examinations. It introduces MMXU, a MedVQA dataset built around multi-image questions that pair a patient's current and historical visits, and shows that LVLMs which perform well on traditional single-image benchmarks still struggle to identify disease progression on MMXU-test. To address this, the authors propose MedRecord-Augmented Generation (MAG), which augments the model's input with both global historical records (the overall disease trajectory) and regional records (region-level changes between visits). Experiments show that incorporating historical records improves diagnostic accuracy by at least 20% on MMXU-test, substantially narrowing the gap with human experts, and that fine-tuning with MAG on MMXU-dev yields further notable gains. The MMXU dataset is publicly released.

📝 Abstract
Large vision-language models (LVLMs) have shown great promise in medical applications, particularly in visual question answering (MedVQA) and diagnosis from medical images. However, existing datasets and models often fail to consider critical aspects of medical diagnostics, such as the integration of historical records and the analysis of disease progression over time. In this paper, we introduce MMXU (Multi-Modal and Multi-X-ray Understanding), a novel dataset for MedVQA that focuses on identifying changes in specific regions between two patient visits. Unlike previous datasets that primarily address single-image questions, MMXU enables multi-image questions, incorporating both current and historical patient data. We demonstrate the limitations of current LVLMs in identifying disease progression on MMXU-test, even those that perform well on traditional benchmarks. To address this, we propose a MedRecord-Augmented Generation (MAG) approach, incorporating both global and regional historical records. Our experiments show that integrating historical records significantly enhances diagnostic accuracy by at least 20%, bridging the gap between current LVLMs and human expert performance. Additionally, we fine-tune models with MAG on MMXU-dev, which demonstrates notable improvements. We hope this work illuminates avenues for advancing the use of LVLMs in medical diagnostics by emphasizing the importance of historical context in interpreting medical images. Our dataset is released at https://github.com/linjiemu/MMXU.
Problem

Research questions and friction points this paper is trying to address.

Existing MedVQA datasets and LVLMs neglect historical records and disease progression over time
How to identify region-level changes between a patient's current and prior X-rays
Whether historical context can close the diagnostic-accuracy gap between LVLMs and human experts
Innovation

Methods, ideas, or system contributions that make the work stand out.

MMXU: a multi-image MedVQA dataset pairing current and historical patient visits
MedRecord-Augmented Generation (MAG)
Integration of both global and regional historical records into LVLM inputs
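The MAG idea of feeding both global and regional historical records to an LVLM can be illustrated with a minimal prompt-construction sketch. This is an assumption-based illustration, not the authors' implementation: the record fields, the `build_mag_prompt` helper, and the prompt layout are all hypothetical.

```python
# Hedged sketch of a MedRecord-Augmented Generation (MAG) style prompt builder.
# The field names and prompt layout below are illustrative assumptions.

def build_mag_prompt(question, global_history, regional_records):
    """Prepend global and region-level historical records to a MedVQA question.

    global_history: list of visit-level summary strings (disease trajectory).
    regional_records: dict mapping an anatomical region to its prior finding.
    """
    lines = ["[Global history]"]
    lines += [f"- {visit}" for visit in global_history]
    lines.append("[Regional records]")
    lines += [f"- {region}: {finding}" for region, finding in regional_records.items()]
    lines.append(f"[Question] {question}")
    return "\n".join(lines)

prompt = build_mag_prompt(
    question="Compared with the prior X-ray, how has the left lower lobe changed?",
    global_history=[
        "2023-01-05: mild cardiomegaly noted",
        "2023-06-12: cardiomegaly stable, no acute findings",
    ],
    regional_records={"left lower lobe": "patchy opacity, possible consolidation"},
)
print(prompt)
```

The assembled text would then be passed to the LVLM together with the current and historical X-ray images, so the model answers the progression question conditioned on both levels of history.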