🤖 AI Summary
Existing medical vision benchmarks suffer from three key limitations: ambiguous query formulations, oversimplified diagnostic reasoning reduced to closed-set shortcuts, and text-centric evaluation that neglects image generation capabilities. To address these, we introduce MedGEN-Bench—the first open-generation multimodal medical benchmark, covering six imaging modalities, 16 clinical tasks, and 28 subtasks. We propose context-entangled instructions and a novel three-tiered evaluation framework integrating pixel-level metrics, semantic textual analysis, and expert-rated clinical relevance scoring—enabling joint assessment of visual question answering, image editing, and multimodal generation. Comprehensive evaluation of 18 state-of-the-art models reveals critical bottlenecks in cross-modal reasoning and clinically grounded semantic generation. MedGEN-Bench establishes a new paradigm for real-world clinical AI deployment and provides a reproducible, task-diverse benchmark for rigorous multimodal medical AI evaluation.
📝 Abstract
As Vision-Language Models (VLMs) increasingly gain traction in medical applications, clinicians are progressively expecting AI systems not only to generate textual diagnoses but also to produce corresponding medical images that integrate seamlessly into authentic clinical workflows. Despite the growing interest, existing medical visual benchmarks present notable limitations. They often rely on ambiguous queries that lack sufficient relevance to image content, oversimplify complex diagnostic reasoning into closed-ended shortcuts, and adopt a text-centric evaluation paradigm that overlooks the importance of image generation capabilities. To address these challenges, we introduce MedGEN-Bench, a comprehensive multimodal benchmark designed to advance medical AI research. MedGEN-Bench comprises 6,422 expert-validated image-text pairs spanning six imaging modalities, 16 clinical tasks, and 28 subtasks. It is structured into three distinct formats: Visual Question Answering, Image Editing, and Contextual Multimodal Generation. What sets MedGEN-Bench apart is its focus on contextually intertwined instructions that necessitate sophisticated cross-modal reasoning and open-ended generative outputs, moving beyond the constraints of multiple-choice formats. To evaluate the performance of existing systems, we employ a novel three-tier assessment framework that integrates pixel-level metrics, semantic text analysis, and expert-guided clinical relevance scoring. Using this framework, we systematically assess 10 compositional frameworks, 3 unified models, and 5 VLMs.