🤖 AI Summary
To address structural feature loss from parameter sharing and catastrophic forgetting induced by fine-tuning in molecular graph–large language model (LLM) integration, this paper proposes an instance-aware dynamic low-rank adaptation mechanism. The pre-trained LLM is frozen, while a graph neural network encodes each molecular structure to generate input-specific low-rank adapter weights on the fly, which are dynamically injected into the LLM's feed-forward layers. This approach circumvents the limitations of static fine-tuning and fixed adapters, enabling, for the first time, molecular-structure-driven personalized parameter generation: the LLM's general reasoning capability is preserved while its molecular awareness is substantially enhanced. Experiments demonstrate significant improvements over state-of-the-art baselines: a 14.1% relative improvement in exact match for chemical reaction prediction and a 22% reduction in error for quantum property prediction.
📝 Abstract
Effectively integrating molecular graph structures with Large Language Models (LLMs) is a key challenge in drug discovery. Most existing multi-modal alignment methods process these structures by fine-tuning the LLM or by adding a static adapter. However, these approaches have two main limitations: (1) they optimize a shared parameter space across all molecular inputs, limiting the model's ability to capture instance-specific structural features; and (2) fine-tuning the LLM for molecular tasks can lead to catastrophic forgetting, undermining its general reasoning capabilities. In this paper, instead of static task-oriented adaptation, we propose an instance-specific approach that aligns the parameter space to each molecule on the fly. To this end, we introduce Molecule-aware Low-Rank Adaptation (MoRA), which produces a unique set of low-rank adaptation weights for each input molecular graph. These weights are then dynamically injected into a frozen LLM, allowing the model to adapt its reasoning to the structure of each molecular input while preserving the LLM's core knowledge. Extensive experiments demonstrate that on key molecular tasks, such as chemical reaction prediction and molecular captioning, MoRA's instance-specific dynamic adaptation outperforms statically adapted baselines, including a 14.1% relative improvement in reaction prediction exact match and a 22% reduction in error for quantum property prediction. The code is available at https://github.com/jk-sounds/MoRA.
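The core mechanism, generating per-molecule low-rank factors and injecting them into a frozen projection, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the hyper-network `generate_lora`, the pooled graph embedding, and all dimensions are illustrative assumptions; in the paper the graph embedding would come from a GNN encoder and the frozen weight from an LLM feed-forward layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, d_graph, rank = 8, 16, 4, 2  # toy dimensions (assumed)

# Frozen feed-forward weight of the LLM: identical for every molecule.
W_frozen = rng.standard_normal((d_ff, d_model))

# Hypothetical hyper-network weights: map a pooled molecular graph
# embedding to the low-rank factors A (rank x d_model) and B (d_ff x rank).
H_A = rng.standard_normal((rank * d_model, d_graph)) * 0.1
H_B = rng.standard_normal((d_ff * rank, d_graph)) * 0.1

def generate_lora(graph_emb):
    """Produce instance-specific low-rank factors from one molecule's embedding."""
    A = (H_A @ graph_emb).reshape(rank, d_model)
    B = (H_B @ graph_emb).reshape(d_ff, rank)
    return A, B

def adapted_ffn(x, graph_emb, scale=1.0):
    """Frozen projection plus the dynamically injected low-rank update B @ A."""
    A, B = generate_lora(graph_emb)
    return x @ W_frozen.T + scale * (x @ A.T @ B.T)

x = rng.standard_normal((3, d_model))   # 3 token hidden states
g1 = rng.standard_normal(d_graph)       # embedding of molecule 1
g2 = rng.standard_normal(d_graph)       # embedding of molecule 2

# Different molecules induce different effective weights; W_frozen is never updated.
out1, out2 = adapted_ffn(x, g1), adapted_ffn(x, g2)
```

Because only the small hyper-network is trained while `W_frozen` stays fixed, each molecule receives its own adapter without any shared fine-tuned parameters, which is what distinguishes this from a single static LoRA.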