🤖 AI Summary
Existing open-source multimodal large language models exhibit limited performance in scientific image understanding, primarily due to narrow domain coverage, coarse annotations, and weak semantic alignment in current datasets. To address this gap, this work introduces OmniScience, a large-scale multimodal dataset spanning more than ten scientific disciplines and comprising 1.5 million image–caption–context triplets. We propose a dynamic model-routing re-captioning pipeline coupled with an expert-in-the-loop quality-filtering mechanism to generate highly informative, self-contained image descriptions. Furthermore, we establish the first Caption QA evaluation protocol tailored to scientific images. Fine-tuning Qwen2.5-VL-3B on OmniScience yields significant improvements, increasing scores by 0.378 on MM-MT-Bench and 0.140 on MMMU, while boosting image–text similarity from 0.769 to 0.956.
📝 Abstract
Multimodal Large Language Models (MLLMs) demonstrate strong performance on natural image understanding, yet exhibit limited capability in interpreting scientific images, including schematic diagrams, experimental characterizations, and analytical charts. This limitation is particularly pronounced in open-source MLLMs. The gap largely stems from existing datasets' limited domain coverage, coarse structural annotations, and weak semantic grounding. We introduce OmniScience, a large-scale, high-fidelity multimodal dataset comprising 1.5 million figure–caption–context triplets spanning more than 10 major scientific disciplines. To obtain image-caption data with higher information density and accuracy for MLLM training, we develop a dynamic model-routing re-captioning pipeline that leverages state-of-the-art MLLMs to generate dense, self-contained descriptions by jointly synthesizing visual features, original figure captions, and the corresponding in-text references authored by human scientists. The pipeline is further reinforced with rigorous quality filtering and alignment with human expert judgments, ensuring both factual accuracy and semantic completeness, and it raises the image–text multimodal similarity score from 0.769 to 0.956. We further propose a Caption QA protocol as a proxy task for evaluating visual understanding. Under this setting, a Qwen2.5-VL-3B model finetuned on OmniScience shows substantial gains over baselines, improving by 0.378 on MM-MT-Bench and 0.140 on MMMU.
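The routing-plus-filtering idea behind the re-captioning pipeline can be sketched as below. This is a minimal illustration only: the routing table, prompt format, threshold, and all function names are hypothetical placeholders, not the authors' actual implementation.

```python
# Hypothetical sketch: route each figure to a captioner by discipline,
# fuse the original caption with in-text references into one prompt,
# then keep only outputs that clear a similarity cutoff (a stand-in
# for the paper's expert-aligned quality filtering).

def route_model(discipline: str) -> str:
    """Pick a captioning model per discipline (illustrative routing table)."""
    routing = {
        "chemistry": "captioner_a",
        "physics": "captioner_b",
    }
    return routing.get(discipline, "captioner_generic")

def recaption(figure: dict, caption_fn) -> dict:
    """Build a joint prompt from caption + in-text references and re-caption."""
    model = route_model(figure["discipline"])
    prompt = (
        f"Original caption: {figure['caption']}\n"
        f"In-text references: {' '.join(figure['contexts'])}\n"
        "Write a dense, self-contained description of the figure."
    )
    return {"model": model, "caption": caption_fn(model, prompt)}

def quality_filter(samples, similarity_fn, threshold=0.9):
    """Keep only samples whose image-text similarity clears the threshold."""
    return [s for s in samples if similarity_fn(s) >= threshold]
```

In practice the `caption_fn` would call a state-of-the-art MLLM and `similarity_fn` an image–text embedding model; both are left abstract here since the paper does not pin down specific backends in this summary.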