ChemDFM: A Large Language Foundation Model for Chemistry

📅 2024-01-26
📈 Citations: 12
Influential: 3
🤖 AI Summary
Existing AI models exhibit limited generalization across chemical tasks, while general-purpose large language models (LLMs) lack domain-specific chemical knowledge. Method: We introduce ChemDFM, the first open-source large language foundation model tailored for chemistry. Built upon the Transformer architecture, ChemDFM is pretrained on 34 billion tokens from chemical literature and textbooks, then fine-tuned with 2.7 million high-quality chemistry-specific instruction-response pairs and aligned for dialogue via supervised fine-tuning and reinforcement learning from human feedback. Contribution/Results: ChemDFM achieves state-of-the-art performance across multiple standardized chemical reasoning benchmarks, outperforming mainstream open-source LLMs and surpassing GPT-4 on a large portion of tasks. To foster reproducible research, the authors publicly release the model weights, inference code, and evaluation datasets.

📝 Abstract
Artificial intelligence (AI) has played an increasingly important role in chemical research. However, most models currently used in chemistry are specialist models that require training and tuning for specific tasks. A more generic and efficient solution would be an AI model that could address many tasks and support free-form dialogue in the broad field of chemistry. In its utmost form, such a generalist AI chemist could be referred to as Chemical General Intelligence. Large language models (LLMs) have recently logged tremendous success in the general domain of natural language processing, showing emerging task generalization and free-form dialogue capabilities. However, domain knowledge of chemistry is largely missing when training general-domain LLMs. The lack of such knowledge greatly hinders the performance of generalist LLMs in the field of chemistry. To this end, we develop ChemDFM, a pioneering LLM for chemistry trained on 34B tokens from chemical literature and textbooks, and fine-tuned using 2.7M instructions. As a result, it can understand and reason with chemical knowledge in free-form dialogue. Quantitative evaluations show that ChemDFM significantly surpasses most representative open-source LLMs. It outperforms GPT-4 on a great portion of chemical tasks, despite the substantial size difference. We have open-sourced the inference codes, evaluation datasets, and model weights of ChemDFM on Huggingface (https://huggingface.co/OpenDFM/ChemDFM-v1.0-13B).
Problem

Research questions and friction points this paper is trying to address.

Lack of chemical knowledge in general-domain LLMs
Need for a versatile AI chemist model
Specialist models require task-specific training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed ChemDFM as a chemistry-specialized LLM
Trained on 34B tokens from chemical literature
Fine-tuned with 2.7M instructions for reasoning
Zihan Zhao
Shanghai Jiao Tong University
NLP
Da Ma
Assistant Professor, School of Medicine, Wake Forest University
Medical Image Computing, Computational Neuroanatomy, Radiogenomics, Neurodegenerative Disease
Lu Chen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China
Liangtai Sun
Master, Shanghai Jiao Tong University
NLP, GUI understanding, Multi-modal
Zihao Li
Shanghai Key Laboratory for Molecular Engineering of Chiral Drugs, School of Chemistry and Chemical Engineering, Shanghai Jiao Tong University, Shanghai, China
Hongshen Xu
Shanghai Jiao Tong University
Natural Language Processing, Large Language Model, LLM Alignment
Zichen Zhu
Shanghai Jiao Tong University
GUI agents, multimodal large models, human-computer interaction
Su Zhu
AI Speech Co., Ltd., Suzhou, China
Shuai Fan
AI Speech Co., Ltd., Suzhou, China
Guodong Shen
University of Warwick
Video anomaly detection, Computer vision
Xin Chen
Suzhou Laboratory, Suzhou, China
Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; Suzhou Laboratory, Suzhou, China