MedGEN-Bench: Contextually entangled benchmark for open-ended multimodal medical generation

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing medical vision benchmarks suffer from three key limitations: ambiguous query formulations, oversimplified diagnostic reasoning reduced to closed-set shortcuts, and text-centric evaluation that neglects image generation capabilities. To address these, we introduce MedGEN-Bench—the first open-generation multimodal medical benchmark, covering six imaging modalities, 16 clinical tasks, and 28 subtasks. We propose context-entangled instructions and a novel three-tiered evaluation framework integrating pixel-level metrics, semantic textual analysis, and expert-rated clinical relevance scoring—enabling joint assessment of visual question answering, image editing, and multimodal generation. Comprehensive evaluation of 18 state-of-the-art models reveals critical bottlenecks in cross-modal reasoning and clinically grounded semantic generation. MedGEN-Bench establishes a new paradigm for real-world clinical AI deployment and provides a reproducible, task-diverse benchmark for rigorous multimodal medical AI evaluation.

📝 Abstract
As Vision-Language Models (VLMs) increasingly gain traction in medical applications, clinicians are progressively expecting AI systems not only to generate textual diagnoses but also to produce corresponding medical images that integrate seamlessly into authentic clinical workflows. Despite the growing interest, existing medical visual benchmarks present notable limitations. They often rely on ambiguous queries that lack sufficient relevance to image content, oversimplify complex diagnostic reasoning into closed-ended shortcuts, and adopt a text-centric evaluation paradigm that overlooks the importance of image generation capabilities. To address these challenges, we introduce MedGEN-Bench, a comprehensive multimodal benchmark designed to advance medical AI research. MedGEN-Bench comprises 6,422 expert-validated image-text pairs spanning six imaging modalities, 16 clinical tasks, and 28 subtasks. It is structured into three distinct formats: Visual Question Answering, Image Editing, and Contextual Multimodal Generation. What sets MedGEN-Bench apart is its focus on contextually intertwined instructions that necessitate sophisticated cross-modal reasoning and open-ended generative outputs, moving beyond the constraints of multiple-choice formats. To evaluate the performance of existing systems, we employ a novel three-tier assessment framework that integrates pixel-level metrics, semantic text analysis, and expert-guided clinical relevance scoring. Using this framework, we systematically assess 10 compositional frameworks, 3 unified models, and 5 VLMs.
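The three-tier assessment the abstract describes could be aggregated along these lines. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the token-overlap proxy for semantic analysis, and the aggregation weights are all hypothetical, and the paper's real framework uses richer pixel metrics and expert-guided scoring.

```python
import math

def psnr(ref, gen, max_val=255.0):
    # Tier 1 (pixel-level): peak signal-to-noise ratio between a reference
    # and a generated image, both given as flat lists of pixel intensities.
    mse = sum((r - g) ** 2 for r, g in zip(ref, gen)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

def token_f1(ref_text, gen_text):
    # Tier 2 (semantic text): crude token-overlap F1 between a reference
    # report and a generated report; a stand-in for real semantic metrics.
    ref_tokens = set(ref_text.lower().split())
    gen_tokens = set(gen_text.lower().split())
    overlap = len(ref_tokens & gen_tokens)
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def three_tier_score(pixel, semantic, expert, weights=(0.3, 0.3, 0.4)):
    # Tier 3 (expert clinical relevance) arrives as a human rating; all
    # three tier scores are assumed normalized to [0, 1]. The weights are
    # illustrative, not taken from the paper.
    return sum(w * s for w, s in zip(weights, (pixel, semantic, expert)))
```

In practice the pixel tier would use metrics such as SSIM alongside PSNR, and the semantic tier would use learned text-similarity measures rather than raw token overlap; the point here is only the structure of combining three heterogeneous tiers into one score.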
Problem

Research questions and friction points this paper is trying to address.

Ambiguous query formulations in existing medical vision benchmarks
Oversimplified diagnostic reasoning reduced to closed-set shortcuts
Text-centric evaluation that neglects image generation capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextually intertwined multimodal benchmark for medical AI
Three-tier assessment framework with clinical relevance scoring
Open-ended generative outputs beyond multiple-choice formats
👥 Authors
Junjie Yang, South China University of Technology
Yuhao Yan, Sun Yat-sen University
Gang Wu, Hangzhou Dianzi University
Yuxuan Wang, Zhejiang University of Finance & Economics
Ruoyu Liang, National University of Singapore
Xinjie Jiang, Hangzhou Dianzi University
Xiang Wan, Shenzhen Research Institute of Big Data (Bioinformatics, Data Mining, Big Data Analysis)
Fenglei Fan, City University of Hong Kong
Yongquan Zhang, Zhejiang University of Finance & Economics
Feiwei Qin, Prof., College of Computer Science, Hangzhou Dianzi University (Artificial Intelligence, Computer-Aided Design, Computer Vision, Medical Image Analysis)
Changmiao Wang, Shenzhen Research Institute of Big Data