MolX: Enhancing Large Language Models for Molecular Learning with A Multi-Modal Extension

📅 2024-06-10
🏛️ arXiv.org
📈 Citations: 8 (2 influential)
🤖 AI Summary
Large language models (LLMs) exhibit limited understanding of molecular structure, especially when relying solely on one-dimensional textual representations such as SMILES strings, which hinders their effectiveness on chemistry tasks. Method: MolX is a lightweight multi-modal extension module that encodes each molecule with dedicated encoders for the SMILES string and the 2D molecular graph (via a GNN), and additionally incorporates a handcrafted molecular fingerprint to leverage its embedded domain knowledge. The module is aligned to the textual input space of a frozen LLM by pre-training the whole model on a diverse set of tasks. Contribution/Results: MolX introduces only 0.53%–0.82% additional trainable parameters (depending on whether the LLM is also fine-tuned) yet outperforms baselines across four downstream tasks, ranging from molecule-to-text translation to retrosynthesis. It thereby improves the cross-task generalization of LLMs in chemistry without architectural modification or full-parameter adaptation.
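For orientation, the following is a minimal PyTorch sketch of the kind of extension module the summary describes: three modality features (a SMILES encoding, a GNN graph embedding, and a fingerprint vector) fused and projected into a small set of soft tokens in the frozen LLM's embedding space. All names and dimensions here (MolXModule, n_tokens, the 2048-bit fingerprint, the 4096-dim LLM) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MolXModule(nn.Module):
    """Hypothetical sketch of a MolX-style extension: fuse SMILES,
    2D-graph, and fingerprint features into soft tokens living in
    the frozen LLM's embedding space."""

    def __init__(self, smiles_dim=768, graph_dim=300, fp_dim=2048,
                 llm_dim=4096, n_tokens=8):
        super().__init__()
        fused_dim = smiles_dim + graph_dim + fp_dim
        # The projector (plus the encoders upstream) is the only
        # trainable part; the LLM backbone itself stays frozen.
        self.projector = nn.Sequential(
            nn.Linear(fused_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim * n_tokens),
        )
        self.n_tokens = n_tokens
        self.llm_dim = llm_dim

    def forward(self, smiles_emb, graph_emb, fp_vec):
        # Inputs: (batch, dim) each; fp_vec should be float-valued.
        # Output: (batch, n_tokens, llm_dim) soft prompt tokens.
        fused = torch.cat([smiles_emb, graph_emb, fp_vec], dim=-1)
        tokens = self.projector(fused)
        return tokens.view(-1, self.n_tokens, self.llm_dim)
```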

📝 Abstract
Large Language Models (LLMs) with their strong task-handling capabilities have shown remarkable advancements across a spectrum of fields, moving beyond natural language understanding. However, their proficiency within the chemistry domain remains restricted, especially in solving professional molecule-related tasks. This challenge is attributed to their inherent limitations in comprehending molecules using only common textual representations, i.e., SMILES strings. In this study, we seek to enhance the ability of LLMs to comprehend molecules by equipping them with a multi-modal external module, namely MolX. In particular, instead of directly using a SMILES string to represent a molecule, we utilize specific encoders to extract fine-grained features from both the SMILES string and the 2D molecular graph representations to feed into an LLM. Moreover, a handcrafted molecular fingerprint is incorporated to leverage its embedded domain knowledge. Then, to establish an alignment between MolX and the LLM's textual input space, the whole model, in which the LLM is frozen, is pre-trained with a versatile strategy that includes a diverse set of tasks. Experimental evaluations show that our proposed method outperforms baselines across 4 downstream molecule-related tasks ranging from molecule-to-text translation to retrosynthesis, with and without fine-tuning the LLM, while introducing only a small number of trainable parameters (0.53% and 0.82%, respectively).
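The 0.53%/0.82% figures are fractions of trainable parameters relative to the full model. A generic way to reproduce that accounting in PyTorch (a sketch of standard practice, not the authors' code) is to freeze the backbone and count parameters that still receive gradients:

```python
import torch.nn as nn

def freeze_backbone(llm: nn.Module) -> None:
    """Freeze every LLM parameter so only the extension module trains."""
    for p in llm.parameters():
        p.requires_grad = False

def trainable_fraction(model: nn.Module) -> float:
    """Share of parameters receiving gradients (the paper reports
    roughly 0.53%-0.82% for the combined MolX + LLM model)."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total
```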
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs for molecular learning with multi-modal inputs
Overcoming SMILES limitations in molecular representation (illustrated after this list)
Improving performance on molecule-related tasks with minimal parameters
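One concrete reason SMILES alone is limiting: the same molecule admits many valid strings, and a text-only LLM sees them as unrelated sequences with no explicit graph structure. The snippet below illustrates this with RDKit, a standard cheminformatics toolkit used here purely for illustration; the paper itself need not use this exact code.

```python
from rdkit import Chem

# Three different SMILES strings that all denote toluene.
variants = ["Cc1ccccc1", "c1ccccc1C", "C1=CC=CC=C1C"]

# Canonicalization collapses them to a single form, confirming they
# are one molecule even though the raw text differs.
canonical = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in variants}
print(canonical)  # a single canonical SMILES for all three inputs
```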
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal extension for molecular learning
Encoders extract features from SMILES and graphs
Pre-trained with diverse tasks for alignment (sketched below)
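A common way to realize this alignment, and a plausible reading of the paper's description (a hypothetical sketch, not the verified implementation), is to prepend the projected molecule tokens to the embedded text prompt before it enters the frozen LLM, extending the attention mask accordingly:

```python
import torch

def build_llm_inputs(mol_tokens, text_embeds, text_mask):
    """Prepend MolX soft tokens so the frozen LLM reads the molecule
    as a short 'virtual' prefix to the text prompt.
    mol_tokens: (B, K, D), text_embeds: (B, T, D), text_mask: (B, T)."""
    inputs = torch.cat([mol_tokens, text_embeds], dim=1)         # (B, K+T, D)
    prefix_mask = torch.ones(mol_tokens.shape[:2],
                             dtype=text_mask.dtype,
                             device=text_mask.device)
    attention_mask = torch.cat([prefix_mask, text_mask], dim=1)  # (B, K+T)
    return inputs, attention_mask
```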
Authors

Khiem Le
University of Notre Dame, IN, USA

Zhichun Guo
Postdoc at IPD, UW; CS Ph.D. at University of Notre Dame
Interests: Machine Learning, Artificial Intelligence, AI4Science

Kaiwen Dong
University of Notre Dame, IN, USA

Xiaobao Huang
University of Notre Dame, IN, USA

B. Nan
University of Notre Dame, IN, USA

Roshni G. Iyer
University of California, Los Angeles, CA, USA

Xiangliang Zhang
Leonard C. Bettex Collegiate Professor, Computer Science and Engineering, University of Notre Dame
Interests: Machine Learning, AI for Science

Olaf Wiest
University of Notre Dame
Interests: reaction mechanisms, computational medicinal and organic chemistry

Wei Wang
University of California, Los Angeles, CA, USA

Nitesh V. Chawla
University of Notre Dame, IN, USA