🤖 AI Summary
Chemical AI models often generalize poorly, require costly task-specific adaptation, and transfer weakly across tasks. To address these challenges, we introduce ChemFM, a general-purpose large language model for chemistry with 3.0 billion parameters, pretrained via self-supervised causal language modeling on 178 million molecular SMILES sequences. ChemFM supports both full-parameter fine-tuning and parameter-efficient methods (e.g., LoRA; a minimal adaptation sketch follows below), enabling unified adaptation to three core chemical AI tasks: molecular property prediction, conditional molecular generation, and reaction prediction. It improves performance by up to 67.48% across 34 property prediction benchmarks, reduces the deviation between conditioned and actual properties of generated molecules by up to 33.80%, and raises top-1 reaction prediction accuracy by up to 3.7%. Notably, ChemFM substantially improves modeling of key biological properties, including antibiotic activity and cytotoxicity, making it a promising foundation for novel antibiotic discovery.
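Below is a minimal sketch of the parameter-efficient adaptation path (LoRA) mentioned above, using the Hugging Face `transformers` and `peft` libraries. The checkpoint identifier and the attention-projection module names are assumptions for illustration, not the paper's released code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Hypothetical checkpoint ID for illustration; substitute the actual
# released ChemFM weights.
CHECKPOINT = "ChemFM/ChemFM-3B"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT)

# LoRA freezes the 3B pretrained weights and injects small trainable
# low-rank matrices into selected projection layers.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the 3B weights train
```

Fine-tuning then proceeds with a standard causal language modeling training loop over task-formatted sequences, while the frozen base weights preserve the pretrained representations.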
📝 Abstract
Artificial intelligence (AI) has significantly advanced computational chemistry research across a wide range of tasks. However, traditional AI methods often rely on task-specific model designs and training, which constrain both the scalability of model size and generalization across different tasks. Here, we introduce ChemFM, a large foundation model specifically developed for chemistry. ChemFM comprises 3 billion parameters and is pre-trained on 178 million molecules using self-supervised causal language modeling to extract generalizable molecular representations. The model can be adapted to diverse downstream chemical applications using either full-parameter or parameter-efficient fine-tuning methods. ChemFM consistently outperforms state-of-the-art task-specific AI models across all tested tasks. Notably, it achieves up to 67.48% performance improvement across 34 property prediction benchmarks, up to 33.80% reduction in the mean absolute deviation between conditioned and actual properties of generated molecules in conditional molecular generation tasks, and up to 3.7% top-1 accuracy improvement across 4 reaction prediction datasets. Moreover, ChemFM demonstrates superior performance in predicting antibiotic activity and cytotoxicity, highlighting its potential to advance the discovery of novel antibiotics. We anticipate that ChemFM will significantly advance chemistry research by providing a foundation model capable of effectively generalizing across a broad range of tasks with minimal additional training.
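To make the pretraining objective concrete, the sketch below shows self-supervised causal language modeling on a single SMILES string: the model is trained to predict each token from the tokens preceding it. The GPT-2 checkpoint is a stand-in so the snippet runs as-is; it is not ChemFM's actual training code or tokenizer.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model/tokenizer so the example is self-contained; ChemFM uses
# its own 3B-parameter architecture and a chemistry-specific tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
batch = tokenizer(smiles, return_tensors="pt")

# Passing labels == input_ids makes the model shift the targets by one
# position internally and return the average next-token cross-entropy loss,
# which is exactly the self-supervised causal LM objective.
outputs = model(**batch, labels=batch["input_ids"])
print(f"causal LM loss: {outputs.loss.item():.4f}")
```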