AstroMMBench: A Benchmark for Evaluating Multimodal Large Language Models Capabilities in Astronomy

📅 2025-09-29
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Existing benchmarks inadequately assess the capabilities of multimodal large language models (MLLMs) on domain-specific scientific tasks such as astronomical image understanding. To address this gap, we introduce AstroMMBench, the first dedicated multimodal benchmark for astrophysics, comprising 621 multiple-choice questions across six subfields, each curated and validated by domain experts. Using this benchmark, we systematically evaluate 25 state-of-the-art MLLMs. Results reveal substantial performance disparities across subfields, with Ovis2-34B achieving the highest accuracy (70.5%), underscoring the benchmark's rigor and discriminative power. This work fills a gap in MLLM evaluation for specialized scientific domains and provides a reusable, expert-verified assessment resource for AI-augmented astronomical research.

📝 Abstract
Astronomical image interpretation presents a significant challenge for applying multimodal large language models (MLLMs) to specialized scientific tasks. Existing benchmarks focus on general multimodal capabilities but fail to capture the complexity of astronomical data. To bridge this gap, we introduce AstroMMBench, the first comprehensive benchmark designed to evaluate MLLMs in astronomical image understanding. AstroMMBench comprises 621 multiple-choice questions across six astrophysical subfields, curated and reviewed by 15 domain experts for quality and relevance. We conducted an extensive evaluation of 25 diverse MLLMs, including 22 open-source and 3 closed-source models, using AstroMMBench. The results show that Ovis2-34B achieved the highest overall accuracy (70.5%), outperforming even strong closed-source models. Performance varied across the six astrophysical subfields: domains such as cosmology and high-energy astrophysics proved particularly challenging, while models fared better in others, such as instrumentation and solar astrophysics. These findings underscore the vital role of domain-specific benchmarks like AstroMMBench in critically evaluating MLLM performance and guiding their targeted development for scientific applications. AstroMMBench provides a foundational resource and a dynamic tool to catalyze advancements at the intersection of AI and astronomy.
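
The protocol described here is standard image-grounded multiple-choice accuracy. As a rough, non-authoritative sketch of how one such item might be scored (the paper's actual harness, prompt template, and data schema are not shown on this page; the model name, field names, and prompt wording below are all assumptions), using an OpenAI-compatible vision chat endpoint:

```python
# Sketch: score one AstroMMBench-style multiple-choice item with a
# vision-capable chat model. Everything here (prompt format, model name,
# choice schema) is illustrative, not the authors' published harness.
import base64
import re

from openai import OpenAI  # assumes an OpenAI-compatible endpoint

client = OpenAI()

def ask_mcq(image_path: str, question: str, choices: dict[str, str],
            model: str = "gpt-4o") -> str:
    """Send an image plus a four-option question; return the predicted letter."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    options = "\n".join(f"{k}. {v}" for k, v in sorted(choices.items()))
    prompt = f"{question}\n{options}\nAnswer with a single letter (A, B, C, or D)."
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    # Take the first standalone A-D letter in the reply; "?" if none found.
    match = re.search(r"\b([ABCD])\b", resp.choices[0].message.content or "")
    return match.group(1) if match else "?"
```

A prediction is counted correct when the returned letter matches the expert-assigned gold answer; overall accuracy is the fraction correct over all 621 items.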
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' astronomical image interpretation capabilities
Addressing lack of domain-specific multimodal benchmarks in astronomy
Assessing model performance across six astrophysical subfields
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces AstroMMBench, an expert-curated benchmark for astronomical image understanding
Evaluates 25 MLLMs (22 open-source, 3 closed-source) on 621 multiple-choice questions
Identifies performance variations across six astrophysical subfields (per-subfield scoring is sketched below)
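
The per-subfield breakdown is a simple aggregation over graded items. A minimal sketch, assuming each graded record carries hypothetical `subfield`, `answer`, and `prediction` fields (the benchmark's real schema is not shown on this page):

```python
# Illustrative per-subfield accuracy tally over graded records.
from collections import defaultdict

def subfield_accuracy(records: list[dict]) -> dict[str, float]:
    """Return accuracy per astrophysical subfield plus an overall score."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subfield"]] += 1
        hits[r["subfield"]] += int(r["prediction"] == r["answer"])
    scores = {s: hits[s] / totals[s] for s in totals}
    scores["overall"] = sum(hits.values()) / sum(totals.values())
    return scores

# Toy example (not real benchmark data):
demo = [
    {"subfield": "cosmology", "answer": "A", "prediction": "B"},
    {"subfield": "solar", "answer": "C", "prediction": "C"},
    {"subfield": "solar", "answer": "D", "prediction": "D"},
]
print(subfield_accuracy(demo))  # cosmology 0.0, solar 1.0, overall ~0.67
```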
👥 Authors
Jinghang Shi
University of Chinese Academy of Sciences
Xiao Yu Tang
Zhejiang Laboratory
Yang Huang
University of Chinese Academy of Sciences
Yuyang Li
Institute for AI, Peking University
Xiao Kong
University of Chinese Academy of Sciences
Yanxia Zhang
Research Scientist at Toyota Research Institute
Caizhan Yue
Tianjin University