🤖 AI Summary
Large language models (LLMs) exhibit a fundamental "geometric blindness" when predicting structure-dependent material properties such as formation energy and elastic moduli. They excel at discrete, categorical pattern recognition but cannot effectively model continuous geometric information such as atomic coordinates, and this limitation persists regardless of model scale, data volume, or conventional tokenization strategy.
Method: The authors introduce MatText, the first multi-representation, multi-task textual benchmark for materials science. It incorporates nine crystal-structure tokenization schemes, including novel physics-informed representations, and is accompanied by geometric sensitivity probes and a structure-to-text auto-conversion toolkit.
Results: Systematic evaluation reveals that state-of-the-art LLMs rely predominantly on local statistical cues and fail to encode global geometric relationships. The new representations improve the use of local features but do not overcome the core geometric modeling bottleneck, underscoring the need for dedicated geometric architectures in materials modeling.
📝 Abstract
Effectively representing materials as text could harness the rapid advances in large language models (LLMs) for discovering new materials. While LLMs have shown remarkable success in various domains, their application to materials science remains underexplored. A fundamental challenge is the lack of understanding of how to best utilize text-based representations for materials modeling. This challenge is further compounded by the absence of a comprehensive benchmark to rigorously evaluate the capabilities and limitations of these text representations in capturing the complexity of material systems. To address this gap, we propose MatText, a suite of benchmarking tools and datasets designed to systematically evaluate the performance of language models in modeling materials. MatText encompasses nine distinct text-based representations for material systems, including several novel representations. Each representation incorporates unique inductive biases that capture relevant information and integrate prior physical knowledge about materials. Additionally, MatText provides essential tools for training and benchmarking the performance of language models in the context of materials science. These tools include standardized dataset splits for each representation, probes for evaluating sensitivity to geometric factors, and tools for seamlessly converting crystal structures into text. Using MatText, we conduct an extensive analysis of the capabilities of language models in modeling materials. Our findings reveal that current language models consistently struggle to capture the geometric information crucial for materials modeling across all representations. Instead, these models tend to leverage local information, which is emphasized in some of our novel representations. Our analysis underscores MatText's ability to reveal shortcomings of text-based methods for materials design.
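The two toolkit ideas above, serializing a crystal structure into text and probing that text's sensitivity to geometric perturbations, can be illustrated with a minimal stdlib-only sketch. Everything here is hypothetical and simplified: the `Crystal` type, `to_text` format, and `geometric_sensitivity` function are illustrative stand-ins, not MatText's actual API, and the "representation" is just lattice parameter plus rounded fractional coordinates rather than any of the paper's nine schemes.

```python
import random
from dataclasses import dataclass

@dataclass
class Crystal:
    # Toy stand-in for a crystal structure (hypothetical, not MatText's API):
    # cubic lattice parameter in Å, element symbols, fractional coordinates.
    a: float
    species: list
    frac_coords: list

def to_text(crystal, decimals=2):
    """Serialize a structure into a token string: a lattice line, then one
    'Element x y z' line per site, with coordinates rounded to `decimals`."""
    lines = [f"lattice {crystal.a:.{decimals}f}"]
    for sp, (x, y, z) in zip(crystal.species, crystal.frac_coords):
        lines.append(f"{sp} {x:.{decimals}f} {y:.{decimals}f} {z:.{decimals}f}")
    return "\n".join(lines)

def geometric_sensitivity(crystal, sigma=0.005, decimals=2, trials=100, seed=0):
    """Probe: fraction of small random coordinate perturbations that change
    the text at all. A low value means the representation itself (before any
    model sees it) discards fine geometric information."""
    rng = random.Random(seed)
    base = to_text(crystal, decimals)
    changed = 0
    for _ in range(trials):
        noisy = Crystal(
            crystal.a,
            crystal.species,
            [[c + rng.gauss(0, sigma) for c in site] for site in crystal.frac_coords],
        )
        if to_text(noisy, decimals) != base:
            changed += 1
    return changed / trials

rocksalt = Crystal(5.64, ["Na", "Cl"], [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
print(to_text(rocksalt))
print(geometric_sensitivity(rocksalt, decimals=1),
      geometric_sensitivity(rocksalt, decimals=3))
```

A probe like this makes the benchmark's core tension concrete: coarser rounding yields shorter, more learnable token strings but hides more geometry, so a model trained on such text can at best recover what survives serialization.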