🤖 AI Summary
General-purpose large language models (LLMs) lack an intrinsic understanding of molecular structure, leaving them unable to reliably distinguish valid molecules from structurally corrupted negative samples and severely limiting their generalization on molecular tasks. Method: We propose Mol-LLM, a generalist molecular LLM trained with a multimodal instruction-tuning paradigm that jointly integrates SMILES strings and molecular graphs, augmented with preference optimization between chosen and rejected molecular graphs. This combination of graph preference learning and multi-task instruction tuning endows the model with the structural understanding needed for molecular topology reasoning and chemical validity assessment. Contribution/Results: Mol-LLM achieves state-of-the-art performance across major molecular benchmarks among generalist LLMs, while surpassing or matching specialized molecular models. Notably, it shows superior cross-task generalization, particularly in reaction prediction, highlighting its robustness and versatility for diverse molecular AI applications.
📝 Abstract
Recent advances in Large Language Models (LLMs) have motivated the development of generalist LLMs for molecular tasks. While several studies have demonstrated that fine-tuned LLMs can achieve impressive benchmark performance, they are far from genuinely generalist molecular LLMs because they lack a fundamental understanding of molecular structure. Specifically, when given molecular task instructions, LLMs trained with naive next-token prediction assign similar likelihood scores to original molecules and to negatively corrupted ones, revealing the lack of molecular structure understanding that is crucial for reliable and general molecular LLMs. To overcome this limitation and obtain a truly generalist molecular LLM, we introduce a novel multi-modal training method that combines thorough multi-modal instruction tuning with molecular structure preference optimization between chosen and rejected graphs. On various molecular benchmarks, the proposed generalist molecular LLM, called Mol-LLM, achieves state-of-the-art performance among generalist LLMs on most tasks while surpassing or matching state-of-the-art specialist LLMs. Mol-LLM also shows superior generalization in reaction prediction tasks, demonstrating the effect of molecular structure understanding on generalization.
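The abstract describes preference optimization between chosen (original) and rejected (structurally corrupted) molecular graphs. A minimal sketch of what such a DPO-style pairwise objective could look like, where the model is pushed to assign higher likelihood to the original molecule than to its corrupted counterpart relative to a frozen reference model. The function name, the β default, and the exact formulation here are illustrative assumptions, not the paper's implementation:

```python
import math

def structure_preference_loss(logp_chosen, logp_rejected,
                              ref_logp_chosen, ref_logp_rejected,
                              beta=0.1):
    """Hypothetical DPO-style preference loss for molecule pairs.

    logp_* are summed token log-likelihoods under the trained model;
    ref_logp_* are the same quantities under a frozen reference model.
    The loss is minimized when the model raises the likelihood of the
    chosen (valid) molecule relative to the rejected (corrupted) one.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written stably as log1p(exp(-margin))
    return math.log1p(math.exp(-margin))
```

When the model scores both molecules identically relative to the reference, the margin is zero and the loss sits at log 2; as the chosen molecule's relative likelihood grows, the loss decays toward zero, which is exactly the behavior the paper's diagnosis (similar likelihoods for original and corrupted molecules) calls for.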