MedFrameQA: A Multi-Image Medical VQA Benchmark for Clinical Reasoning

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical diagnosis relies heavily on comparing a series of medical images, yet existing medical visual question answering (VQA) benchmarks support only single-image reasoning. To bridge this gap, we introduce MedFrameQA, the first clinical-reasoning-oriented multi-image medical VQA benchmark, covering nine body systems and 43 organs. Each question comprises 2–5 temporally coherent medical images (9,237 frames extracted from 3,420 clinical videos), yielding 2,851 high-quality question-answer pairs. Contributions include: (1) the first evaluation paradigm for multi-image medical VQA; (2) an automated question-generation pipeline that extracts video frames and constructs questions whose content evolves logically across images; and (3) a multi-stage quality-control protocol combining model-based pre-screening and expert validation. Under a unified evaluation framework, we assess ten state-of-the-art multimodal large language models (MLLMs); most score below 50% accuracy, exposing critical bottlenecks including weak cross-image evidence aggregation and error propagation through reasoning chains.

📝 Abstract
Existing medical VQA benchmarks mostly focus on single-image analysis, yet clinicians almost always compare a series of images before reaching a diagnosis. To better approximate this workflow, we introduce MedFrameQA -- the first benchmark that explicitly evaluates multi-image reasoning in medical VQA. To build MedFrameQA at scale and with high quality, we develop 1) an automated pipeline that extracts temporally coherent frames from medical videos and constructs VQA items whose content evolves logically across images, and 2) a multi-stage filtering strategy, including model-based and manual review, to preserve data clarity, difficulty, and medical relevance. The resulting dataset comprises 2,851 VQA pairs (gathered from 9,237 high-quality frames in 3,420 videos), covering nine human body systems and 43 organs; every question is accompanied by two to five images. We comprehensively benchmark ten advanced Multimodal LLMs -- both proprietary and open source, with and without explicit reasoning modules -- on MedFrameQA. The evaluation reveals that all models perform poorly, with most accuracies below 50%, and that accuracy fluctuates as the number of images per question increases. Error analysis further shows that models frequently ignore salient findings, mis-aggregate evidence across images, and propagate early mistakes through their reasoning chains; results also vary substantially across body systems, organs, and modalities. We hope this work can catalyze research on clinically grounded, multi-image reasoning and accelerate progress toward more capable diagnostic AI systems.
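The frame-selection step of such a pipeline can be sketched as a small index-sampling routine. This is a minimal illustration, not the paper's actual extraction method: the function name, the evenly-spaced heuristic, and the minimum-gap parameter are all assumptions.

```python
def select_frame_indices(total_frames, fps, n_images, min_gap_s=2.0):
    """Pick n_images roughly evenly spaced frame indices from a video,
    keeping selected frames at least min_gap_s seconds apart.

    Hypothetical sketch: the real MedFrameQA pipeline also enforces
    temporal coherence and clinical relevance, which is not modeled here.
    """
    min_gap = int(fps * min_gap_s)          # minimum gap in frames
    span = total_frames - 1                 # last valid frame index
    step = max(span // max(n_images - 1, 1), min_gap)
    # Clamp each index so we never run past the end of the video.
    return [min(i * step, span) for i in range(n_images)]
```

For a 10-second clip at 30 fps (300 frames) and a 5-image question, this yields five indices spread across the clip, each at least two seconds apart.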
Problem

Research questions and friction points this paper is trying to address.

Lack of multi-image reasoning benchmarks in medical VQA
Difficulty in scaling and ensuring quality in medical VQA datasets
Poor performance of advanced Multimodal LLMs in multi-image medical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline extracts coherent medical video frames
Multiple-stage filtering ensures data quality and relevance
Benchmarks ten advanced Multimodal LLMs for clinical reasoning
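The benchmarking result that accuracy fluctuates with the number of images per question suggests a per-image-count breakdown of model accuracy. A minimal sketch of that aggregation follows; the record schema (`n_images`, `pred`, `answer`) is a hypothetical illustration, not the paper's evaluation code.

```python
from collections import defaultdict

def accuracy_by_image_count(records):
    """Compute accuracy grouped by how many images each question uses.

    records: iterable of dicts with keys 'n_images' (int),
    'pred' and 'answer' (model prediction and gold label).
    Returns {n_images: accuracy}. Schema is illustrative only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["n_images"]] += 1
        correct[r["n_images"]] += int(r["pred"] == r["answer"])
    return {k: correct[k] / total[k] for k in total}
```

Plotting the returned dictionary across 2 to 5 images per question would reproduce the kind of fluctuation analysis the abstract describes.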