ReXInTheWild: A Unified Benchmark for Medical Photograph Understanding

📅 2026-03-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the lack of comprehensive evaluation benchmarks for vision-language models on real-world, everyday medical photographs, a task requiring both natural image perception and clinical reasoning. To bridge this gap, the authors introduce the first fine-grained, clinically grounded multimodal benchmark derived from authentic medical photography, comprising 484 literature-sourced images and 955 expert-validated multiple-choice questions spanning seven clinical domains. Systematic evaluation on this benchmark reveals that general-purpose models (e.g., Gemini-3 at 78% accuracy) substantially outperform specialized medical models such as MedGemma (37% accuracy). The study further identifies four characteristic error patterns, offering clear directions for future model refinement and development in medical multimodal understanding.

๐Ÿ“ Abstract
Everyday photographs taken with ordinary cameras are already widely used in telemedicine and other online health conversations, yet no comprehensive benchmark evaluates whether vision-language models can interpret their medical content. Analyzing these images requires both fine-grained natural image understanding and domain-specific medical reasoning, a combination that challenges both general-purpose and specialized models. We introduce ReXInTheWild, a benchmark of 955 clinician-verified multiple-choice questions spanning seven clinical topics across 484 photographs sourced from the biomedical literature. When evaluated on ReXInTheWild, leading multimodal large language models show substantial performance variation: Gemini-3 achieves 78% accuracy, followed by Claude Opus 4.5 (72%) and GPT-5 (68%), while the medical specialist model MedGemma reaches only 37%. A systematic error analysis also reveals four categories of common errors, ranging from low-level geometric errors to high-level reasoning failures and requiring different mitigation strategies. ReXInTheWild provides a challenging, clinically grounded benchmark at the intersection of natural image understanding and medical reasoning. The dataset is available on HuggingFace.
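The reported model scores come from answer-matching over the benchmark's clinician-verified multiple-choice questions. As a minimal sketch of that scoring step, the helper below computes accuracy from a model's predicted option letters against a gold answer key; the question IDs and answers here are made-up illustrations, and the exact record format of the HuggingFace release is not specified in this summary.

```python
# Minimal sketch of multiple-choice accuracy scoring, of the kind used to
# compare models on a benchmark like ReXInTheWild. Record layout is assumed,
# not taken from the actual dataset release.

def mcq_accuracy(predictions, answer_key):
    """Fraction of questions where the predicted option letter matches the key.

    predictions: dict mapping question id -> predicted letter (e.g. "A").
    answer_key:  dict mapping question id -> gold letter.
    Missing or malformed predictions count as incorrect.
    """
    if not answer_key:
        raise ValueError("answer key is empty")
    correct = sum(
        1
        for qid, gold in answer_key.items()
        if predictions.get(qid, "").strip().upper() == gold.strip().upper()
    )
    return correct / len(answer_key)

# Toy example: three of four predictions match the key (case-insensitively).
key = {"q1": "A", "q2": "C", "q3": "B", "q4": "D"}
preds = {"q1": "A", "q2": "C", "q3": "D", "q4": "d"}
print(mcq_accuracy(preds, key))  # 0.75
```

Scoring unanswered questions as wrong (rather than dropping them) keeps accuracies comparable across models that refuse or fail to answer different subsets of questions.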
Problem

Research questions and friction points this paper is trying to address.

medical photograph understanding
vision-language models
benchmark
multimodal large language models
medical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

medical photograph understanding
vision-language models
multimodal benchmark
clinical reasoning
error analysis
Oishi Banerjee
PhD Student, Harvard University
AI for Medicine, AI for Healthcare
Sung Eun Kim
Department of Biomedical Informatics, Harvard Medical School, Boston, MA; National Strategic Technology Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
Alexandra N. Willauer
Department of Biomedical Informatics, Harvard Medical School, Boston, MA; Department of Medicine, Division of Gastroenterology, Massachusetts General Hospital, Boston, MA
Julius M. Kernbach
Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
Abeer Rihan Alomaish
King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
Reema Abdulwahab S. Alghamdi
King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
Hassan Rayhan Alomaish
King Abdulaziz Medical City, National Guard Health Affairs, Riyadh, Saudi Arabia
Mohammed Baharoon
Harvard Medical School
Computer Vision, Multimodal Learning, Unsupervised Learning, Foundation Models
Xiaoman Zhang
Harvard University
AI for Medicine, Medical Image Analysis
Julian Nicolas Acosta
Department of Biomedical Informatics, Harvard Medical School, Boston, MA
Christine Zhou
Department of Biomedical Informatics, Harvard Medical School, Boston, MA; Division of Pulmonary, Critical Care, and Sleep Medicine, University of Cincinnati, Cincinnati, OH
Pranav Rajpurkar
Department of Biomedical Informatics, Harvard Medical School, Boston, MA