🤖 AI Summary
Existing scientific foundation models are typically trained in isolation on single domains, limiting unified representation learning and cross-domain collaboration. To address this, we propose NatureLM, the first cross-scientific-domain sequence foundation model, which unifies natural entities (small molecules, proteins, RNA, and materials) into a shared "natural language" representation space, enabling semantic alignment and cross-modal generation. NatureLM introduces a multi-domain joint self-supervised pretraining paradigm that integrates SMILES, FASTA, RNA sequences, and crystallographic data within a single Transformer architecture, with model variants scaling up to 46.7B parameters. The model supports text-guided generation, cross-domain design (e.g., protein-to-molecule), and multi-task scientific reasoning. It achieves state-of-the-art performance on SMILES-to-IUPAC translation and USPTO-50k retrosynthesis prediction, and demonstrates practical efficacy in end-to-end drug discovery, novel material design, and therapeutic protein and RNA generation.
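To make the idea of a shared sequence space concrete, the sketch below wraps entities from each domain in domain tags and interleaves them with natural-language text, so that one autoregressive model can consume all of them as a single token stream. The tag names, the example protein string, and the sample layout are illustrative assumptions, not NatureLM's documented tokenization scheme.

```python
# Hypothetical sketch of a unified "language of nature" sequence space.
# Tag names and formats are assumptions for illustration only.

DOMAIN_TAGS = {
    "molecule": ("<mol>", "</mol>"),            # SMILES strings
    "protein": ("<protein>", "</protein>"),     # FASTA amino-acid sequences
    "rna": ("<rna>", "</rna>"),                 # RNA nucleotide sequences
    "material": ("<material>", "</material>"),  # crystal composition strings
}

def to_unified_sequence(entity: str, domain: str) -> str:
    """Serialize a domain entity into the shared sequence space."""
    open_tag, close_tag = DOMAIN_TAGS[domain]
    return f"{open_tag}{entity}{close_tag}"

# A single training sample can then interleave text with tagged entities,
# e.g. a protein-to-molecule design prompt:
sample = (
    "Design a small molecule that binds the following protein. "
    + to_unified_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "protein")
    + " Answer: "
    + to_unified_sequence("CC(=O)Oc1ccccc1C(=O)O", "molecule")  # aspirin SMILES
)
print(sample)
```

Because every domain is reduced to text-like tokens, cross-domain tasks such as protein-to-molecule generation become ordinary next-token prediction over one vocabulary.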
📝 Abstract
Foundation models have revolutionized natural language processing and artificial intelligence, significantly enhancing how machines comprehend and generate human languages. Inspired by the success of these foundation models, researchers have developed foundation models for individual scientific domains, including small molecules, materials, proteins, DNA, and RNA. However, these models are typically trained in isolation, lacking the ability to integrate across different scientific domains. Recognizing that entities within these domains can all be represented as sequences, which together form the "language of nature", we introduce Nature Language Model (briefly, NatureLM), a sequence-based science foundation model designed for scientific discovery. Pre-trained with data from multiple scientific domains, NatureLM offers a unified, versatile model that enables various applications, including: (i) generating and optimizing small molecules, proteins, RNA, and materials using text instructions; (ii) cross-domain generation/design, such as protein-to-molecule and protein-to-RNA generation; and (iii) achieving state-of-the-art performance in tasks like SMILES-to-IUPAC translation and retrosynthesis on USPTO-50k. NatureLM offers a promising generalist approach for various scientific tasks, including drug discovery (hit generation/optimization, ADMET optimization, synthesis), novel material design, and the development of therapeutic proteins or nucleotides. We have developed NatureLM models in different sizes (1 billion, 8 billion, and 46.7 billion parameters) and observed a clear improvement in performance as the model size increases.
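As a hedged sketch of how text-instruction-driven generation (application (i) above) could look through a generic causal-LM interface, the snippet below uses the Hugging Face transformers API. The checkpoint path, prompt layout, and domain tags are placeholders, since the abstract does not specify NatureLM's release format or prompt conventions.

```python
# Hedged sketch: text-guided molecule generation via a generic causal-LM API.
# The checkpoint path and prompt format below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/naturelm-8b"  # hypothetical local checkpoint path

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# A text instruction steering cross-domain (protein-to-molecule) generation:
prompt = (
    "Instruction: propose a small molecule with improved solubility "
    "that binds the given protein.\n"
    "Protein: <protein>MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ</protein>\n"
    "Molecule:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```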