MATRIX: A Multimodal Benchmark and Post-Training Framework for Materials Science

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of effective benchmarks for evaluating how visual experimental data enhances mechanistic scientific reasoning, particularly in materials science, where multimodal integration of experimental observations and physical theory remains underexplored. To this end, we introduce MATRIX, a multimodal benchmark designed to systematically assess models’ capabilities in fundamental theoretical understanding, research-level reasoning, and interpretation of real multimodal experimental outputs. By comparing text-only and vision-language post-training strategies, we demonstrate, for the first time in materials science, that aligning visual and textual modalities significantly boosts scientific reasoning performance. Experiments show that even with limited multimodal data, visual supervision improves accuracy in experimental interpretation by 10–25% and enhances textual reasoning by 5–16%, with consistent generalization gains observed on ScienceQA and PubMedQA.

📝 Abstract
Scientific reasoning in materials science requires integrating multimodal experimental evidence with underlying physical theory. Existing benchmarks make it difficult to assess whether incorporating visual experimental data during post-training improves mechanism-grounded explanation reasoning beyond text-only supervision. We introduce MATRIX, a multimodal benchmark for materials science reasoning that evaluates foundational theory, research-level reasoning, and the interpretation of real experimental artifacts across multiple characterization modalities. Using MATRIX as a controlled diagnostic, we isolate the effect of visual grounding by comparing post-training on structured materials science text alone with post-training that incorporates paired experimental images. Despite using relatively small amounts of multimodal data, visual supervision improves experimental interpretation by 10-25% and yields 5-16% gains on text-only scientific reasoning tasks. Our results demonstrate that these improvements rely on correct image-text alignment during post-training, highlighting cross-modal representational transfer. We also observe consistent improvements on ScienceQA and PubMedQA, demonstrating that the benefits of structured multimodal post-training extend beyond materials science. The MATRIX dataset is available at https://huggingface.co/datasets/radical-ai/MATRIX and the model at https://huggingface.co/radical-ai/MATRIX-PT.
Problem

Research questions and friction points this paper is trying to address.

multimodal benchmark
materials science
scientific reasoning
post-training
visual grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal post-training
visual grounding
cross-modal transfer
materials science reasoning
image-text alignment
Delia McGrath
Radical AI
Curtis Chong
Radical AI
Rohil Kulkarni
Radical AI
Gerbrand Ceder
Professor of Materials Science and Engineering
Materials design, computational modeling, energy storage, thermoelectrics, solar
Adeesh Kolluru
Radical AI