Chemical Language Model Linker: blending text and molecules with modular adapters

📅 2024-10-26
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses text-conditioned molecule generation with a lightweight, plug-and-play approach that avoids pretraining a multimodal model from scratch. The proposed ChemLML is a modular adapter that bridges pretrained large language models (LLMs) and pretrained chemical representation models (e.g., MolFormer), aligning the two modalities while still operating in the molecular models' own embedding spaces. The authors compare SMILES and SELFIES as molecular serialization formats and construct a filtered PubChem dataset for more rigorous generation evaluation. Key contributions: (1) a decoupled adapter architecture that can integrate diverse pretrained LLMs without retraining them; (2) empirical evidence that SMILES often outperforms SELFIES in text-guided molecular generation despite not guaranteeing valid molecules; and (3) practical demonstrations in which generated candidate protein inhibitors are assessed with molecular docking and candidate membrane-permeable molecules are generated. ChemLML achieves strong performance while fine-tuning only about 0.1% of parameters across diverse LLM backbones, substantially reducing computational cost.
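As a rough sketch of the adapter idea the summary describes: the snippet below projects frozen text-encoder embeddings into the conditioning space of a frozen molecular decoder. All names, dimensions, and module choices here are hypothetical illustrations, not ChemLML's actual implementation.

```python
# Minimal sketch of an adapter bridging two frozen backbones.
# Dimensions and module names are hypothetical, not ChemLML's code.
import torch
import torch.nn as nn

class TextToMoleculeAdapter(nn.Module):
    """Maps text-encoder embeddings into a molecular decoder's space."""

    def __init__(self, text_dim=4096, mol_dim=768, hidden_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, mol_dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, seq_len, text_dim) from a frozen LLM
        return self.proj(text_emb)  # (batch, seq_len, mol_dim)

adapter = TextToMoleculeAdapter()
dummy = torch.randn(2, 16, 4096)   # fake text embeddings
print(adapter(dummy).shape)        # torch.Size([2, 16, 768])
```

Only the adapter's parameters would be updated during training; both pretrained backbones stay frozen, which is what keeps the trainable parameter count small.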

📝 Abstract
The development of large language models and multi-modal models has enabled the appealing idea of generating novel molecules from text descriptions. Generative modeling would shift the paradigm from relying on large-scale chemical screening to find molecules with desired properties to directly generating those molecules. However, multi-modal models combining text and molecules are often trained from scratch, without leveraging existing high-quality pretrained models. Training from scratch consumes more computational resources and prohibits model scaling. In contrast, we propose a lightweight adapter-based strategy named Chemical Language Model Linker (ChemLML). ChemLML blends the two single domain models and obtains conditional molecular generation from text descriptions while still operating in the specialized embedding spaces of the molecular domain. ChemLML can tailor diverse pretrained text models for molecule generation by training relatively few adapter parameters. We find that the choice of molecular representation used within ChemLML, SMILES versus SELFIES, has a strong influence on conditional molecular generation performance. SMILES is often preferable despite not guaranteeing valid molecules. We raise issues in using the entire PubChem dataset of molecules and their associated descriptions for evaluating molecule generation and provide a filtered version of the dataset as a generation test set. To demonstrate how ChemLML could be used in practice, we generate candidate protein inhibitors and use docking to assess their quality and also generate candidate membrane permeable molecules.
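The abstract's point that SMILES does not guarantee valid molecules can be checked directly with RDKit (SELFIES, by construction, always decodes to a valid molecule). A minimal validity check, assuming RDKit is installed:

```python
from rdkit import Chem

def is_valid_smiles(smiles: str) -> bool:
    """True if RDKit can parse the string into a molecule."""
    return Chem.MolFromSmiles(smiles) is not None

print(is_valid_smiles("CCO"))   # True: ethanol
print(is_valid_smiles("C1CC"))  # False: ring bond 1 is never closed
```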
Problem

Research questions and friction points this paper is trying to address.

Bridging text and molecular models with lightweight adapters
Enabling conditional molecule generation from text descriptions
Evaluating molecular representation impact on generation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight adapter-based strategy for blending pretrained models
Finds SMILES often outperforms SELFIES for conditional generation
Generates molecules from text by training few adapter parameters (a toy illustration of the trainable fraction follows below)
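To make the "few adapter parameters" point concrete, here is a toy calculation of the trainable fraction when backbones are frozen and only an adapter is trained. The models below are small stand-ins, not the actual ChemLML components.

```python
import torch.nn as nn

def trainable_fraction(model: nn.Module) -> float:
    """Share of parameters that will receive gradient updates."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total

# Stand-in backbone and adapter (real models are far larger).
backbone = nn.Linear(4096, 4096)   # pretend pretrained LLM
adapter = nn.Linear(4096, 768)     # small trainable bridge

for p in backbone.parameters():
    p.requires_grad = False        # freeze the backbone

model = nn.Sequential(backbone, adapter)
print(f"{trainable_fraction(model):.4f}")  # only the adapter is trainable
```

The paper's reported figure of roughly 0.1% arises because the frozen LLM backbone has billions of parameters; this toy example only shows the mechanics of the calculation.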
Yifan Deng
University of Wisconsin-Madison
Machine Learning · AI for Science

Spencer S. Ericksen
Scientist, University of Wisconsin-Madison
Computational Chemistry · Drug Development

A. Gitter
Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison; Department of Computer Sciences, University of Wisconsin-Madison; Morgridge Institute for Research