🤖 AI Summary
Multimodal large language models (MLLMs) exhibit critical capability bottlenecks in chemistry and materials science research, particularly in data extraction, experimental understanding, and results interpretation. Method: We introduce MaCBench, the first cross-modal benchmark tailored to authentic scientific workflows; it comprises real-world spectrograms, experimental apparatus diagrams, and literature-based image-text pairs, enabling zero-shot and few-shot vision-language evaluation. Contribution/Results: Comprehensive evaluation reveals that while MLLMs excel at basic perception tasks (e.g., device identification and standardized data extraction), they struggle with higher-order scientific reasoning, including spatial reasoning, cross-modal information fusion, and multi-step logical inference. This work establishes the first fine-grained, task-driven multimodal evaluation framework for chemistry and materials science, rigorously delineating current MLLM capabilities and providing a foundational benchmark to guide future model development and scientific AI more broadly.
📝 Abstract
Recent advancements in artificial intelligence have sparked interest in scientific assistants that could support researchers across the full spectrum of scientific workflows, from literature review to experimental design and data analysis. A key capability for such systems is the ability to process and reason about scientific information in both visual and textual forms, from interpreting spectroscopic data to understanding laboratory setups. Here, we introduce MaCBench, a comprehensive benchmark for evaluating how vision-language models handle real-world chemistry and materials science tasks across three core aspects: data extraction, experimental understanding, and results interpretation. Through a systematic evaluation of leading models, we find that while these systems show promising capabilities in basic perception tasks, achieving near-perfect performance in equipment identification and standardized data extraction, they exhibit fundamental limitations in spatial reasoning, cross-modal information synthesis, and multi-step logical inference. Our insights have important implications beyond chemistry and materials science, suggesting that developing reliable multimodal AI scientific assistants may require advances both in curating suitable training data and in the approaches used to train these models.