Can Multimodal LLMs See Materials Clearly? A Multimodal Benchmark on Materials Characterization

📅 2025-09-11
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) face dual bottlenecks in understanding materials characterization images: insufficient domain-specific expert knowledge and inadequate complex visual perception. To address this, we introduce MatCha, the first benchmark for materials characterization image understanding, spanning four key stages of materials research, 21 realistic tasks, and 1,500 expert-level reasoning questions derived from authentic scientific practice. We systematically evaluate mainstream MLLMs under few-shot and chain-of-thought prompting. Results reveal a substantial performance gap between state-of-the-art models and human experts, especially on questions demanding higher-order expertise and fine-grained microstructural identification, and neither prompting strategy alleviates these limitations. MatCha fills a gap in domain-specific multimodal evaluation and empirically exposes the limited adaptability of MLLMs to real-world materials research, establishing a benchmark for guiding model refinement, domain alignment, and rigorous assessment of scientific reasoning in multimodal AI.
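
As a rough illustration of the chain-of-thought evaluation protocol described above, the sketch below queries a vision-capable model on a single MatCha-style multiple-choice question. It assumes an OpenAI-compatible API; the question format, model choice, and prompt wording here are illustrative assumptions, not the paper's exact harness.

```python
# Minimal sketch of CoT evaluation on one MatCha-style question.
# Assumes an OpenAI-compatible vision endpoint; the schema is illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_cot(image_path: str, question: str, choices: list[str]) -> str:
    """Send one characterization image plus a multiple-choice question,
    prompting the model to reason step by step before answering."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    prompt = (
        f"{question}\n{options}\n"
        "Think step by step about the visible microstructural features, "
        "then give your final answer as a single letter."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Illustrative call; the question and options are made up, not from MatCha.
print(ask_with_cot(
    "sem_sample.png",
    "Which characterization technique most likely produced this image?",
    ["Optical microscopy", "SEM", "TEM", "AFM"],
))
```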

📝 Abstract
Materials characterization is fundamental to acquiring materials information, revealing the processing-microstructure-property relationships that guide material design and optimization. While multimodal large language models (MLLMs) have recently shown promise in generative and predictive tasks within materials science, their capacity to understand real-world characterization imaging data remains underexplored. To bridge this gap, we present MatCha, the first benchmark for materials characterization image understanding, comprising 1,500 questions that demand expert-level domain expertise. MatCha encompasses four key stages of materials research comprising 21 distinct tasks, each designed to reflect authentic challenges faced by materials scientists. Our evaluation of state-of-the-art MLLMs on MatCha reveals a significant performance gap compared to human experts. These models exhibit degradation when addressing questions requiring higher-level expertise and sophisticated visual perception. Simple few-shot and chain-of-thought prompting struggle to alleviate these limitations. These findings highlight that existing MLLMs still exhibit limited adaptability to real-world materials characterization scenarios. We hope MatCha will facilitate future research in areas such as new material discovery and autonomous scientific agents. MatCha is available at https://github.com/FreedomIntelligence/MatCha.
Problem

Research questions and friction points this paper addresses.

Assessing MLLMs' understanding of materials characterization imaging data
Evaluating expert-level image comprehension in materials science applications
Identifying performance gaps in multimodal models for real-world materials analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

MatCha, the first benchmark for materials characterization image understanding
Systematic evaluation of state-of-the-art MLLMs on real characterization imaging data, including few-shot and chain-of-thought prompting
1,500 questions requiring expert-level domain knowledge, spanning four research stages and 21 tasks (a hypothetical scoring sketch follows this list)
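
A hedged sketch of how answers on such a multiple-choice benchmark might be scored. The record fields (`model_output`, `answer`) and the letter-extraction heuristic are assumptions for illustration, not MatCha's published schema or official scorer.

```python
# Hypothetical scoring loop for letter-keyed multiple-choice records.
import re

def extract_choice(model_output: str) -> str | None:
    """Pull the final A-D option letter out of a free-form
    chain-of-thought answer (heuristic: the last standalone letter wins)."""
    matches = re.findall(r"\b([A-D])\b", model_output)
    return matches[-1] if matches else None

def accuracy(records: list[dict]) -> float:
    """Fraction of records whose extracted letter matches the answer key.
    Field names 'model_output' and 'answer' are hypothetical."""
    correct = sum(
        1 for r in records
        if extract_choice(r["model_output"]) == r["answer"]
    )
    return correct / len(records) if records else 0.0
```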
Authors

Zhengzhao Lai (The Chinese University of Hong Kong, Shenzhen)
Youbin Zheng (Northeastern University)
Zhenyang Cai (The Chinese University of Hong Kong, Shenzhen)
Haonan Lyu (Zhejiang University)
Jinpu Yang (Northeastern University)
Hongqing Liang (Zhejiang University)
Yan Hu (The Chinese University of Hong Kong, Shenzhen)
Benyou Wang (Assistant Professor, The Chinese University of Hong Kong, Shenzhen)
Research interests: large language models, natural language processing, information retrieval, applied machine learning