AI Summary
The application of large language models (LLMs) in materials science and chemistry lacks systematic organization and paradigmatic synthesis. Method: This study conducts the first large-scale integration of 34 cross-scenario LLM deployments spanning seven domains, including property prediction, materials design, and research automation, employing hybrid techniques: open- and closed-source LLMs, prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and multimodal structured–unstructured co-modeling. Contribution/Results: We propose three novel paradigms (low-data adaptation, hypothesis-driven generation, and multimodal knowledge fusion) that overcome key bottlenecks in few-shot learning and complex scientific reasoning. Experimental evaluation demonstrates that LLMs function effectively as high-accuracy predictive models, rapid prototyping platforms, and autonomous scientific agents, achieving a 3.2× acceleration in molecular design, an 89.7% F1 score in critical information extraction from literature, and substantial improvements in experimental workflow automation.
Abstract
Large Language Models (LLMs) are reshaping many aspects of materials science and chemistry research, enabling advances in molecular property prediction, materials design, scientific automation, knowledge extraction, and more. Recent developments demonstrate that the latest class of models can integrate structured and unstructured data, assist in hypothesis generation, and streamline research workflows. To explore the frontier of LLM capabilities across the research lifecycle, we review applications of LLMs through 34 projects developed during the second annual Large Language Model Hackathon for Applications in Materials Science and Chemistry, a global hybrid event. These projects spanned seven key research areas: (1) molecular and material property prediction, (2) molecular and material design, (3) automation and novel interfaces, (4) scientific communication and education, (5) research data management and automation, (6) hypothesis generation and evaluation, and (7) knowledge extraction and reasoning from the scientific literature. Collectively, these applications illustrate how LLMs serve as versatile predictive models, platforms for rapid prototyping of domain-specific tools, and much more. In particular, improvements in both open-source and proprietary LLMs, driven by reasoning capabilities, additional training data, and new techniques, have expanded their effectiveness, particularly in low-data environments and interdisciplinary research. As LLMs continue to improve, their integration into scientific workflows presents both new opportunities and new challenges, requiring ongoing exploration, continued refinement, and further research to address reliability, interpretability, and reproducibility.