Benchmarking Cross-Scale Perception Ability of Large Multimodal Models in Material Science

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of scientific evaluation benchmarks capable of assessing large multimodal models' understanding of multiscale structure–property relationships in materials science, spanning from microscopic to macroscopic levels. To this end, we introduce CSMBench, the first multiscale benchmark specifically designed for materials science, encompassing atomic, micro-, meso-, and macroscopic scales. Built upon high-quality figures from peer-reviewed journals, CSMBench features open-ended image captioning and multiple-choice figure–caption matching tasks to systematically evaluate both open- and closed-source large models on cross-scale reasoning. Our experiments reveal significant performance disparities across scales, highlighting current models' deficiencies in hierarchical materials understanding and offering clear guidance for future model development.

📝 Abstract
Unraveling hierarchical structure–property relationships is the central challenge of materials science, necessitating the interpretation of data across vast physical scales, from micro to macro. Despite the rapid integration of Large Multimodal Models (LMMs) into scientific workflows, existing scientific benchmarks primarily focus on general chart interpretation or isolated common-sense reasoning, failing to capture reasoning ability across intricate physical dimensions. To address this, we introduce CSMBench, a dataset comprising 1,041 high-quality figures curated from premier journals published up to September 2025. CSMBench categorizes data into four scientifically distinct regimes: atomic, micro, meso, and macro scales, strictly aligned with the focus and definitions used in materials research. Through open-ended figure description and multiple-choice caption matching tasks, we evaluate state-of-the-art open-source and closed-source models. Our analysis shows that performance varies significantly across physical scales owing to their distinct visual characteristics, highlighting the limitations of current generalist models and identifying critical directions for achieving hierarchical and accurate understanding in materials research. CSMBench is publicly released at: https://huggingface.co/datasets/lututu/CSMBench.
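The abstract's key finding is that accuracy on the multiple-choice figure–caption matching task varies across the four physical scales. A minimal sketch of such a per-scale breakdown is shown below; the record field names (`scale`, `prediction`, `answer`) and the toy results are illustrative assumptions, not the benchmark's actual schema.

```python
from collections import defaultdict

def per_scale_accuracy(records):
    """Group multiple-choice results by physical scale and compute accuracy.

    Each record is a dict with (assumed, illustrative) keys:
      "scale"      - one of "atomic", "micro", "meso", "macro"
      "prediction" - option letter chosen by the model
      "answer"     - ground-truth option letter
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["scale"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["scale"]] += 1
    return {s: correct[s] / total[s] for s in total}

# Toy example of the cross-scale disparity the paper reports:
results = [
    {"scale": "atomic", "prediction": "B", "answer": "C"},
    {"scale": "atomic", "prediction": "A", "answer": "A"},
    {"scale": "macro",  "prediction": "D", "answer": "D"},
    {"scale": "macro",  "prediction": "B", "answer": "B"},
]
print(per_scale_accuracy(results))  # {'atomic': 0.5, 'macro': 1.0}
```

Reporting accuracy per scale rather than one aggregate number is what exposes the hierarchical gaps the paper emphasizes.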
Problem

Research questions and friction points this paper is trying to address.

cross-scale perception
large multimodal models
materials science
hierarchical structure-property relationships
scientific benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-scale perception
Large Multimodal Models
Materials science
Hierarchical reasoning
Scientific benchmarking
Yuting Zheng
Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University
Zijian Chen
Shanghai Jiao Tong University | Shanghai AI Laboratory
Image/Video Quality Assessment | Large Multi-modal Models
Jia Qi
Shanghai Artificial Intelligence Laboratory