Combining GCN Structural Learning with LLM Chemical Knowledge for Enhanced Virtual Screening

📅 2025-04-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional virtual screening methods suffer from information loss and bias due to predefined molecular representations. To address this, we propose a GCN-LLM hybrid architecture that dynamically concatenates precomputed LLM-derived chemical embeddings at each GCN layer, enabling continuous infusion of global chemical knowledge into deep graph-structured learning—without invoking the LLM during training or inference, thus preserving both modeling capacity and computational efficiency. This work introduces the first lightweight, static integration of LLM semantic representations into the GCN layer-wise propagation mechanism. Evaluated on standard drug screening benchmarks, our method achieves an F1-score of 88.8%, outperforming standalone GCN (87.9%), XGBoost (85.5%), and SVM (85.4%), thereby demonstrating the efficacy of synergistically combining structural learning with chemical prior knowledge.

📝 Abstract
Virtual screening plays a critical role in modern drug discovery by enabling the identification of promising candidate molecules for experimental validation. Traditional machine learning methods such as support vector machines (SVM) and XGBoost rely on predefined molecular representations, often leading to information loss and potential bias. In contrast, deep learning approaches, particularly Graph Convolutional Networks (GCNs), offer a more expressive and unbiased alternative by operating directly on molecular graphs. Meanwhile, Large Language Models (LLMs) have recently demonstrated state-of-the-art performance in drug design, thanks to their capacity to capture complex chemical patterns from large-scale data via attention mechanisms. In this paper, we propose a hybrid architecture that integrates GCNs with LLM-derived embeddings to combine localized structural learning with global chemical knowledge. The LLM embeddings can be precomputed and stored in a molecular feature library, removing the need to rerun the LLM during training or inference and thus maintaining computational efficiency. We found that concatenating the LLM embeddings after each GCN layer, rather than only at the final layer, significantly improves performance, enabling deeper integration of global context throughout the network. The resulting model achieves superior results, with an F1-score of 88.8%, outperforming standalone GCN (87.9%), XGBoost (85.5%), and SVM (85.4%) baselines.
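The layer-wise concatenation described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumed details (feature dimensions, symmetric normalization, mean-pooling readout), not the authors' implementation:

```python
import numpy as np

def gcn_layer_with_llm(H, A_hat, W, llm_emb):
    """One GCN propagation step, ReLU(A_hat @ H @ W), followed by
    concatenating a precomputed molecule-level LLM embedding to every
    node -- the per-layer infusion the paper describes (sketch only)."""
    H_new = np.maximum(A_hat @ H @ W, 0.0)             # ReLU(Â H W)
    llm_tiled = np.tile(llm_emb, (H_new.shape[0], 1))  # broadcast to each node
    return np.concatenate([H_new, llm_tiled], axis=1)

# Toy 4-atom molecular graph with symmetric normalization Â = D^{-1/2}(A+I)D^{-1/2}
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
A_t = A + np.eye(4)
d = A_t.sum(axis=1)
A_hat = A_t / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))         # initial atom features (assumed dim)
llm_emb = rng.normal(size=(16,))     # precomputed LLM embedding for this molecule
W1 = rng.normal(size=(8, 32))
W2 = rng.normal(size=(32 + 16, 32))  # next layer's input includes the LLM dims

H1 = gcn_layer_with_llm(H0, A_hat, W1, llm_emb)  # shape (4, 48)
H2 = gcn_layer_with_llm(H1, A_hat, W2, llm_emb)  # shape (4, 48)
graph_emb = H2.mean(axis=0)  # pooled readout for a downstream classifier head
```

Because the LLM embedding is a fixed vector looked up once per molecule, each layer only grows its input width by the embedding dimension; the LLM itself is never invoked inside the forward pass.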
Problem

Research questions and friction points this paper is trying to address.

Enhance virtual screening with GCN and LLM integration
Overcome bias in traditional molecular representation methods
Improve drug discovery via hybrid deep learning architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines GCN structural learning with LLM embeddings
Precomputes LLM embeddings for computational efficiency
Integrates global context via layer-wise LLM concatenation
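The precomputation step behind the efficiency claim amounts to building a lookup table of embeddings before training. A minimal sketch, where `fake_llm_encoder` is a hypothetical stand-in for whatever LLM encoder produces the chemical embeddings:

```python
import numpy as np

def build_llm_feature_library(smiles_list, embed_fn):
    """Precompute one embedding per molecule so the LLM is never
    called during GCN training or inference (illustrative helper;
    `embed_fn` stands in for a real LLM encoder)."""
    return {smiles: embed_fn(smiles) for smiles in smiles_list}

def fake_llm_encoder(smiles, dim=16):
    # Deterministic-per-run placeholder vector, NOT a chemical model.
    rng = np.random.default_rng(abs(hash(smiles)) % (2**32))
    return rng.normal(size=(dim,))

library = build_llm_feature_library(["CCO", "c1ccccc1", "CC(=O)O"],
                                    fake_llm_encoder)
emb = library["CCO"]  # O(1) dictionary lookup at train/inference time
```

Swapping the dictionary for an on-disk store (e.g. a memory-mapped array keyed by molecule ID) would keep the same O(1)-lookup property for larger screening libraries.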
Radia Berreziga
Department of Computer Science, USTHB, Algiers, Algeria
Mohammed Brahimi
K. Kraim
Laboratory of Physical Chemistry and Biological Materials, Algiers, Algeria
Hamid Azzoune
Department of Computer Science, USTHB, Algiers, Algeria