Fine-Tuned Language Models Generate Stable Inorganic Materials as Text

📅 2024-02-06
🏛️ International Conference on Learning Representations
📈 Citations: 67 (7 influential)
🤖 AI Summary
Traditional generative models struggle to satisfy fundamental physical constraints, such as consistent atomic positions and charge neutrality, when generating inorganic crystal structures. Method: The paper textually encodes crystal structures and fine-tunes LLaMA-2 models (up to 70B) for unconditional generation, infilling of partial structures, and text-conditioned generation, then screens candidates with a machine-learned interatomic potential (M3GNet) and validates energies with DFT-based convex hull (energy above hull) analysis. Contribution/Results: The work shows that pretrained LLMs capture key crystallographic symmetries surprisingly well. About 90% of generated structures satisfy basic physical constraints, and 49% are predicted metastable, substantially outperforming the diffusion model CDVAE (28%). Moreover, scaling the model improves its ability to capture space groups and translational symmetry, establishing LLMs as flexible, controllable tools for crystal structure discovery.
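To make the text encoding concrete, here is a minimal Python sketch of the string representation described above: lattice lengths and angles followed by each element symbol and its fractional coordinates. Using pymatgen and the specific rounding precision are assumptions; the authors' exact formatting may differ.

```python
# Minimal sketch of the crystal-to-string encoding described in the
# summary: lattice lengths and angles on the first two lines, then each
# element symbol followed by its fractional coordinates. The rounding
# precision is an assumption, not the authors' exact format.
from pymatgen.core import Lattice, Structure

def structure_to_text(structure: Structure) -> str:
    lines = [
        " ".join(f"{x:.1f}" for x in structure.lattice.abc),
        " ".join(f"{x:.0f}" for x in structure.lattice.angles),
    ]
    for site in structure:
        lines.append(site.species_string)
        lines.append(" ".join(f"{x:.2f}" for x in site.frac_coords))
    return "\n".join(lines)

# Example: rock-salt NaCl in its conventional cubic cell.
nacl = Structure.from_spacegroup(
    "Fm-3m", Lattice.cubic(5.64), ["Na", "Cl"],
    [[0, 0, 0], [0.5, 0.5, 0.5]],
)
print(structure_to_text(nacl))
```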

📝 Abstract
We propose fine-tuning large language models for generation of stable materials. While unorthodox, fine-tuning large language models on text-encoded atomistic data is simple to implement yet reliable, with around 90% of sampled structures obeying physical constraints on atom positions and charges. Using energy above hull calculations from both learned ML potentials and gold-standard DFT calculations, we show that our strongest model (fine-tuned LLaMA-2 70B) can generate materials predicted to be metastable at about twice the rate (49% vs 28%) of CDVAE, a competing diffusion model. Because of text prompting's inherent flexibility, our models can simultaneously be used for unconditional generation of stable material, infilling of partial structures and text-conditional generation. Finally, we show that language models' ability to capture key symmetries of crystal structures improves with model scale, suggesting that the biases of pretrained LLMs are surprisingly well-suited for atomistic data.
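The energy-above-hull screen mentioned in the abstract can be illustrated with pymatgen's phase diagram tools. The sketch below is a hedged example: all energies are made-up placeholders, and the 0.1 eV/atom cutoff is a commonly used metastability threshold rather than necessarily the paper's exact criterion.

```python
# Hedged sketch of an energy-above-hull metastability screen using
# pymatgen. All energies below are made-up placeholders, not results
# from the paper or any database.
from pymatgen.core import Composition
from pymatgen.entries.computed_entries import ComputedEntry
from pymatgen.analysis.phase_diagram import PhaseDiagram

# Reference entries spanning the Na-Cl system (normally pulled from a
# database such as the Materials Project).
references = [
    ComputedEntry(Composition("Na"), -1.3),    # placeholder total energy, eV
    ComputedEntry(Composition("Cl"), -1.8),    # placeholder total energy, eV
    ComputedEntry(Composition("NaCl"), -7.0),  # placeholder total energy, eV
]
# Hypothetical generated candidate with its (placeholder) computed energy.
candidate = ComputedEntry(Composition("Na2Cl2"), -13.8)

pd = PhaseDiagram(references)
e_hull = pd.get_e_above_hull(candidate)  # eV/atom above the convex hull
# 0.1 eV/atom is a commonly used metastability cutoff (an assumption here).
print(f"E_hull = {e_hull:.3f} eV/atom; metastable: {e_hull < 0.1}")
```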
Problem

Research questions and friction points this paper is trying to address.

Generate stable inorganic crystal structures with fine-tuned language models instead of domain-specific generative architectures
Raise the fraction of metastable generated materials above diffusion-model baselines such as CDVAE
Support unconditional generation, infilling of partial structures, and text-conditional generation through a single flexible prompting interface (illustrated in the sketch after this list)
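The sketch below illustrates how the three prompting modes above can share one template family. The template wording is an assumption paraphrased from the setup the paper describes, not the authors' exact prompt strings.

```python
# Illustrative-only sketch of the three prompting modes. The prompt
# wording here is an assumption, not the authors' exact text.
def make_prompt(mode: str, condition: str = "", partial: str = "") -> str:
    header = (
        "Below is a description of a bulk material. "
        "Generate a description of the lengths and angles of the lattice "
        "vectors and then the element type and coordinates for each atom "
        "within the lattice:\n"
    )
    if mode == "unconditional":
        return header
    if mode == "conditional":
        # condition carries the text constraint, e.g. a chemical formula.
        return f"Below is a description of a bulk material. {condition}\n"
    if mode == "infill":
        # partial is a structure string with some atoms left for the
        # model to complete.
        return header + partial
    raise ValueError(f"unknown mode: {mode}")

print(make_prompt("conditional", condition="The chemical formula is NaCl."))
```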
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLaMA-2 models generate stable inorganic materials directly as text
Text-encoded atomistic data is simple to fine-tune on, with roughly 90% of samples obeying constraints on atom positions and charges (validity checks sketched below)
Generates materials predicted metastable at about twice the rate of the diffusion model CDVAE (49% vs 28%)
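The physical-constraint checks behind the ~90% figure can be sketched with pymatgen. The minimal sketch below follows CDVAE-style validity criteria in spirit (a 0.5 Å minimum interatomic distance and a charge-neutral oxidation-state assignment); treating pymatgen's oxidation-state guesser as a stand-in for the SMACT charge check used in the literature is an assumption.

```python
# Hedged sketch of the two physical-constraint checks: all pairwise
# distances above a 0.5 angstrom cutoff, and existence of a charge-neutral
# oxidation-state assignment for the composition.
import numpy as np
from pymatgen.core import Structure

def is_physically_valid(structure: Structure, min_dist: float = 0.5) -> bool:
    # Structural check: no two atoms closer than min_dist angstroms.
    d = structure.distance_matrix
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    if d.min() < min_dist:
        return False
    # Compositional check: at least one charge-neutral oxidation-state
    # assignment exists for the overall composition.
    return len(structure.composition.oxi_state_guesses()) > 0
```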