🤖 AI Summary
Existing benchmarks inadequately assess the capabilities of multimodal large language models (MLLMs) on domain-specific scientific tasks such as astronomical image understanding. To address this gap, we introduce AstroMMBench, the first dedicated multimodal benchmark for astrophysics, comprising 621 multiple-choice questions across six subfields, all curated and validated by domain experts. Using this benchmark, we systematically evaluate 25 state-of-the-art MLLMs. Results reveal substantial performance disparities across subfields, with Ovis2-34B achieving the highest overall accuracy (70.5%), demonstrating the benchmark's discriminative power. This work fills a critical gap in MLLM evaluation for specialized scientific domains and provides a scalable foundation for advancing AI-assisted astronomical research.
📝 Abstract
Astronomical image interpretation poses a significant challenge for applying multimodal large language models (MLLMs) to specialized scientific tasks. Existing benchmarks focus on general multimodal capabilities but fail to capture the complexity of astronomical data. To bridge this gap, we introduce AstroMMBench, the first comprehensive benchmark designed to evaluate MLLMs on astronomical image understanding. AstroMMBench comprises 621 multiple-choice questions across six astrophysical subfields, curated and reviewed by 15 domain experts for quality and relevance. Using AstroMMBench, we conducted an extensive evaluation of 25 diverse MLLMs, comprising 22 open-source and 3 closed-source models. The results show that Ovis2-34B achieved the highest overall accuracy (70.5%), outperforming even strong closed-source models. Performance varied across the six astrophysical subfields: models found domains such as cosmology and high-energy astrophysics particularly challenging, while performing relatively better in others, such as instrumentation and solar astrophysics. These findings underscore the vital role of domain-specific benchmarks like AstroMMBench in critically evaluating MLLM performance and guiding their targeted development for scientific applications. AstroMMBench provides a foundational resource and a dynamic tool to catalyze advances at the intersection of AI and astronomy.
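The headline numbers above are plain accuracy scores over multiple-choice answers, reported overall and per subfield. A minimal sketch of how such scoring could be computed is shown below; the record layout and field names (`subfield`, `gold`, `pred`) are illustrative assumptions, not the paper's actual data schema or evaluation code.

```python
from collections import defaultdict

def score_benchmark(records):
    """Compute overall and per-subfield accuracy for multiple-choice answers.

    Each record is a dict with keys 'subfield' (category name), 'gold'
    (correct option letter), and 'pred' (the model's chosen option letter).
    These names are hypothetical placeholders for illustration.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["subfield"]] += 1
        if r["pred"] == r["gold"]:
            correct[r["subfield"]] += 1
    per_subfield = {s: correct[s] / total[s] for s in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_subfield

# Toy example with two subfields (not real benchmark data)
records = [
    {"subfield": "cosmology", "gold": "A", "pred": "A"},
    {"subfield": "cosmology", "gold": "B", "pred": "C"},
    {"subfield": "solar", "gold": "D", "pred": "D"},
]
overall, per_subfield = score_benchmark(records)
```

Reporting per-subfield accuracy alongside the overall score is what lets a benchmark like this expose uneven capabilities, e.g. weaker performance on cosmology than on solar astrophysics.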