🤖 AI Summary
Molecular property prediction (MPP) faces two obstacles: insufficient integration of human prior knowledge, and the knowledge gaps and hallucinations of large language models (LLMs), particularly for sparsely studied properties. Method: We propose a knowledge-enhanced multimodal fusion framework that, for the first time, jointly leverages domain-specific knowledge and executable Python code generated by LLMs (GPT-4o, GPT-4.1, DeepSeek-R1) to construct semantically rich molecular representations; these are then aligned and fused, via cross-modal alignment, with structural representations extracted by graph neural networks. The approach requires no LLM fine-tuning, ensuring both interpretability and computational efficiency. Contribution/Results: Evaluated on multiple standard MPP benchmarks, the method significantly outperforms existing state-of-the-art approaches, demonstrating the effectiveness of knowledge-guided representation learning for generalization, robustness, and adaptability in few-shot settings.
📝 Abstract
Predicting molecular properties is a critical component of drug discovery. Recent advances in deep learning, particularly Graph Neural Networks (GNNs), have enabled end-to-end learning from molecular structures, reducing reliance on manual feature engineering. However, while GNNs and self-supervised learning approaches have advanced molecular property prediction (MPP), the integration of human prior knowledge remains indispensable, as evidenced by recent methods that leverage large language models (LLMs) for knowledge extraction. Despite their strengths, LLMs are constrained by knowledge gaps and hallucinations, particularly for less-studied molecular properties. In this work, we propose a novel framework that, for the first time, integrates knowledge extracted from LLMs with structural features derived from pre-trained molecular models to enhance MPP. Our approach prompts LLMs to generate both domain-relevant knowledge and executable code for molecular vectorization, producing knowledge-based features that are subsequently fused with structural representations. We employ three state-of-the-art LLMs, GPT-4o, GPT-4.1, and DeepSeek-R1, for knowledge extraction. Extensive experiments demonstrate that our integrated method outperforms existing approaches, confirming that the combination of LLM-derived knowledge and structural information provides a robust and effective solution for MPP.
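The fusion idea described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: `llm_generated_vectorizer` is a hypothetical stand-in for the executable vectorization code an LLM would produce, the GNN embedding is replaced by a random placeholder, and the projections are untrained. It only shows how a knowledge-based feature vector and a structural embedding can be projected into a shared space, scored for cross-modal alignment, and concatenated for a downstream property predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def llm_generated_vectorizer(smiles: str) -> np.ndarray:
    # Hypothetical stand-in for LLM-generated vectorization code:
    # a toy character-count featurization of the SMILES string.
    keys = "CNOPSclBrF=#()123456789"
    return np.array([smiles.count(k) for k in keys], dtype=float)

def project(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    # Linear projection into the shared space, then L2 normalization.
    z = x @ W
    return z / (np.linalg.norm(z) + 1e-8)

d_shared = 16
smiles = "CCO"  # ethanol, as an example input molecule

knowledge_feat = llm_generated_vectorizer(smiles)   # knowledge modality
structural_feat = rng.standard_normal(64)           # placeholder for a GNN embedding

# Untrained projection matrices; in practice these would be learned
# jointly with an alignment objective.
W_k = rng.standard_normal((knowledge_feat.size, d_shared))
W_s = rng.standard_normal((structural_feat.size, d_shared))

z_k = project(knowledge_feat, W_k)
z_s = project(structural_feat, W_s)

# Cross-modal alignment score (cosine similarity of unit vectors);
# training would maximize this for matching pairs.
alignment = float(z_k @ z_s)

# Fused representation fed to the downstream property predictor.
fused = np.concatenate([z_k, z_s])
print(fused.shape, alignment)
```

In a real system the two projections would be trained with a contrastive or alignment loss, and the fused vector would feed a prediction head; this sketch only fixes the data flow.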