🤖 AI Summary
Existing AI models exhibit limited generalization across chemical tasks, while general-purpose large language models (LLMs) lack domain-specific chemical knowledge. Method: We introduce ChemDFM, a pioneering open-source large language foundation model tailored for chemistry. Built upon the Transformer architecture, ChemDFM is pretrained on 34 billion tokens from chemical literature and textbooks, then fine-tuned with 2.7 million chemistry-specific instruction-response pairs and aligned for free-form dialogue. Contribution/Results: ChemDFM achieves strong performance across multiple standardized chemical reasoning benchmarks, significantly surpassing most representative open-source LLMs and outperforming GPT-4 on a substantial portion of chemical tasks despite its much smaller size. To foster reproducible and scalable research, we publicly release the model weights, inference code, and evaluation datasets.
📝 Abstract
Artificial intelligence (AI) has played an increasingly important role in chemical research. However, most models currently used in chemistry are specialist models that require training and tuning for specific tasks. A more generic and efficient solution would be an AI model that could address many tasks and support free-form dialogue across the broad field of chemistry. In its ultimate form, such a generalist AI chemist could be referred to as Chemical General Intelligence. Large language models (LLMs) have recently achieved tremendous success in the general domain of natural language processing, exhibiting emerging task generalization and free-form dialogue capabilities. However, domain knowledge of chemistry is largely missing when training general-domain LLMs, and this gap greatly hinders their performance in the field of chemistry. To this end, we develop ChemDFM, a pioneering LLM for chemistry pretrained on 34B tokens from chemical literature and textbooks and fine-tuned on 2.7M instructions. As a result, it can understand and reason with chemical knowledge in free-form dialogue. Quantitative evaluations show that ChemDFM significantly surpasses most representative open-source LLMs. It even outperforms GPT-4 on a substantial portion of chemical tasks, despite the considerable difference in model size. We have open-sourced the inference code, evaluation datasets, and model weights of ChemDFM on Huggingface (https://huggingface.co/OpenDFM/ChemDFM-v1.0-13B).