MatSciBench: Benchmarking the Reasoning Ability of Large Language Models in Materials Science

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models (LLMs) have not been systematically evaluated for scientific reasoning in materials science. Method: We introduce MatSciBench—the first comprehensive, multimodal reasoning benchmark tailored to materials science—comprising 1,340 questions across six domains and 31 subfields. It features a novel three-tier difficulty taxonomy, a fine-grained disciplinary classification scheme, and multimodal question formats integrating text, mathematical formulas, and images. The benchmark enables fine-grained analysis of advanced reasoning strategies, including chain-of-thought prompting, tool-augmented reasoning, self-correction, and retrieval-augmented generation. Contribution/Results: Empirical evaluation reveals that even the state-of-the-art model Gemini-2.5-Pro achieves less than 80% accuracy, exposing critical limitations in domain-specific scientific reasoning. MatSciBench provides a reproducible, extensible evaluation infrastructure and concrete directions for advancing materials AI.
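To make the taxonomy described above concrete, here is a minimal sketch of what a benchmark question record might look like. This is an assumed schema for illustration only: the field names (`question_id`, `image_path`, etc.), the tier labels, and the example contents are hypothetical, not the paper's actual data format.

```python
# Hypothetical sketch of a MatSciBench question record, based on the taxonomy
# described above (6 primary fields, 31 subfields, three-tier difficulty,
# optional image context). All names and values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatSciQuestion:
    question_id: str
    field: str                        # one of the 6 primary fields
    subfield: str                     # one of the 31 subfields
    difficulty: str                   # one of three tiers (tier names assumed here)
    question_text: str                # may embed LaTeX formulas
    reference_solution: str           # detailed solution enabling error analysis
    answer: str                       # ground-truth final answer
    image_path: Optional[str] = None  # set for multimodal (visual-context) questions

# Example record; all contents are invented for illustration.
q = MatSciQuestion(
    question_id="mech-001",
    field="Mechanical Behavior",
    subfield="Fracture Mechanics",
    difficulty="medium",
    question_text=r"Estimate the critical stress for a through crack of length $2a$ ...",
    reference_solution="Apply the Griffith criterion ...",
    answer="215 MPa",
)
```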

Technology Category
Generative Language Models, LLM Agents, Natural Language Processing, Machine Learning, AI4Science

Application Category
AI for Materials Science, VQA
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable abilities in scientific reasoning, yet their reasoning capabilities in materials science remain underexplored. To fill this gap, we introduce MatSciBench, a comprehensive college-level benchmark comprising 1,340 problems that span the essential subdisciplines of materials science. MatSciBench features a structured and fine-grained taxonomy that categorizes materials science questions into 6 primary fields and 31 sub-fields, and includes a three-tier difficulty classification based on the reasoning length required to solve each question. MatSciBench provides detailed reference solutions enabling precise error analysis and incorporates multimodal reasoning through visual contexts in numerous questions. Evaluations of leading models reveal that even the highest-performing model, Gemini-2.5-Pro, achieves under 80% accuracy on college-level materials science questions, highlighting the complexity of MatSciBench. Our systematic analysis of different reasoning strategies (basic chain-of-thought, tool augmentation, and self-correction) demonstrates that no single method consistently excels across all scenarios. We further analyze performance by difficulty level, examine trade-offs between efficiency and accuracy, highlight the challenges inherent in multimodal reasoning tasks, analyze failure modes across LLMs and reasoning methods, and evaluate the influence of retrieval-augmented generation. MatSciBench thus establishes a comprehensive and solid benchmark for assessing and driving improvements in the scientific reasoning capabilities of LLMs within the materials science domain.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking LLM reasoning abilities in materials science with comprehensive college-level questions
Evaluating performance across difficulty levels and multimodal reasoning tasks
Analyzing the effectiveness of different reasoning strategies for scientific problem-solving (a sketch of these strategies follows this list)
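As a hedged illustration of the reasoning strategies the paper compares, the sketch below wires basic chain-of-thought and self-correction into a simple accuracy loop. `call_llm`, the prompt wording, and the exact-match grading are hypothetical stand-ins, not the paper's actual evaluation harness.

```python
# Minimal sketch of two of the reasoning strategies evaluated in the paper.
def call_llm(prompt: str) -> str:
    """Placeholder for an API call to the model under evaluation."""
    raise NotImplementedError

def solve_with_cot(question: str) -> str:
    # Basic chain-of-thought: ask the model to reason step by step.
    return call_llm(f"{question}\n\nThink step by step, then state the final answer.")

def solve_with_self_correction(question: str) -> str:
    # Self-correction: obtain a draft, then ask the model to review and revise it.
    draft = solve_with_cot(question)
    critique = (
        f"{question}\n\nDraft solution:\n{draft}\n\n"
        "Check the reasoning and units for errors, then give a corrected final answer."
    )
    return call_llm(critique)

def accuracy(questions: list[str], answers: list[str], solver) -> float:
    # Naive exact-match grading; real grading of numeric or symbolic answers
    # would need normalization beyond string comparison.
    correct = sum(solver(q).strip() == a.strip() for q, a in zip(questions, answers))
    return correct / len(questions)
```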
Innovation

Methods, ideas, or system contributions that make the work stand out.

MatSciBench introduces a comprehensive college-level materials science benchmark
The benchmark includes a structured taxonomy and a three-tier difficulty classification
Systematically evaluates multiple reasoning strategies and multimodal contexts (a retrieval-augmented sketch follows this list)
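The paper also evaluates retrieval-augmented generation. Below is a minimal sketch of that setup, assuming a toy lexical retriever over a hypothetical reference corpus; the paper's actual retriever, corpus, and prompts are not specified here, and `call_llm` is the same placeholder as in the earlier sketch.

```python
# Hedged sketch of retrieval-augmented generation: prepend retrieved reference
# passages (e.g. textbook excerpts) to the question before querying the model.
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def solve_with_rag(question: str, corpus: list[str]) -> str:
    context = "\n\n".join(retrieve(question, corpus))
    prompt = (
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\n"
        "Use the reference material where relevant; think step by step, "
        "then state the final answer."
    )
    return call_llm(prompt)  # placeholder LLM call from the earlier sketch
```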
Junkai Zhang
University of California, Los Angeles
Jingru Gan
University of California, Los Angeles
Xiaoxuan Wang
University of California, Los Angeles
Zian Jia
Princeton University
Changquan Gu
University of California, Los Angeles
Jianpeng Chen
Virginia Tech
Yanqiao Zhu
University of California, Los Angeles
Mingyu Derek Ma
Prescient Design, Genentech/Roche
Dawei Zhou
Virginia Tech
Ling Li
University of Pennsylvania
Wei Wang
University of California, Los Angeles