Multitask finetuning and acceleration of chemical pretrained models for small molecule drug property prediction

📅 2025-10-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Accurate and efficient prediction of ADMET properties for small-molecule drugs remains challenging due to limited data and poor model generalizability. Method: We propose an enhanced multi-task fine-tuning framework that integrates graph neural networks (GNNs) and graph Transformers, combining self-supervised pretraining, multi-task learning, and knowledge-guided supervision to enable effective transfer learning across chemical space. Contribution/Results: To ensure fair evaluation, we construct and publicly release two standardized multi-task ADMET benchmark splits with consistent train/validation/test partitions. We also open-source KERMT, an optimized implementation achieving substantial speedups in both training and inference. Extensive experiments demonstrate that our approach significantly outperforms non-pretrained baselines across multiple critical ADMET prediction tasks, especially in large-data regimes, enabling industrial-grade large-scale pretraining and low-latency inference.

πŸ“ Abstract
Chemical pretrained models, sometimes referred to as foundation models, are receiving considerable interest for drug discovery applications. The general chemical knowledge extracted from self-supervised training has the potential to improve predictions for critical drug discovery endpoints, including on-target potency and ADMET properties. Multi-task learning has previously been successfully leveraged to improve predictive models. Here, we show that enabling multitasking in finetuning of chemical pretrained graph neural network models such as Kinetic GROVER Multi-Task (KERMT), an enhanced version of the GROVER model, and Knowledge-guided Pre-training of Graph Transformer (KPGT) significantly improves performance over non-pretrained graph neural network models. Surprisingly, we find that the performance improvement from finetuning KERMT in a multitask manner is most significant at larger data sizes. Additionally, we publish two multitask ADMET data splits to enable more accurate benchmarking of multitask deep learning methods for drug property prediction. Finally, we provide an accelerated implementation of the KERMT model on GitHub, unlocking large-scale pretraining, finetuning, and inference in industrial drug discovery workflows.
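A practical detail behind multitask ADMET finetuning is that each molecule is usually measured for only a subset of endpoints, so the training loss has to skip missing labels rather than treat them as zeros. As a minimal illustrative sketch (not the authors' code; the function name and data are hypothetical), a masked multi-task regression loss in plain Python:

```python
import math

# Illustrative masked multi-task loss: each molecule is labeled for only
# some tasks, and missing labels (stored as NaN) are excluded per task.
# Hypothetical sketch, not the paper's implementation.

def masked_multitask_mse(preds, labels):
    """Average per-task MSE over observed (non-NaN) labels.

    preds, labels: lists of rows, one row per molecule, one column per task.
    """
    n_tasks = len(labels[0])
    per_task = []
    for t in range(n_tasks):
        errs = [(p[t] - y[t]) ** 2
                for p, y in zip(preds, labels)
                if not math.isnan(y[t])]   # keep only measured endpoints
        if errs:
            per_task.append(sum(errs) / len(errs))
    return sum(per_task) / len(per_task)

nan = float("nan")
preds  = [[0.5, 1.0], [0.0, 2.0], [1.0, 0.0]]   # 3 molecules, 2 tasks
labels = [[0.5, nan], [1.0, 2.0], [1.0, 1.0]]   # one missing measurement
loss = masked_multitask_mse(preds, labels)       # (1/3 + 1/2) / 2
```

Masking this way is what lets sparsely overlapping assay datasets be merged into a single multitask training set, which is also why standardized multitask splits with consistent partitions matter for fair benchmarking.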
Problem

Research questions and friction points this paper is trying to address.

Improving small molecule drug property prediction accuracy
Enhancing chemical pretrained models via multitask finetuning
Accelerating drug discovery workflows through optimized implementations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multitask finetuning of pretrained chemical graph neural networks
Enhanced models KERMT and KPGT improve drug property prediction
Accelerated implementation enables industrial-scale drug discovery workflows
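The multitask-finetuning idea in the bullets above can be sketched as one shared pretrained encoder feeding several task-specific heads that are trained jointly. In the toy example below, a frozen random projection stands in for a pretrained GNN encoder such as KERMT; all shapes, names, and data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "pretrained" encoder (a frozen random projection standing in for
# a pretrained GNN) feeding task-specific linear heads finetuned jointly.
# Purely illustrative; all sizes and data are synthetic.
n_mols, n_feat, n_hidden, n_tasks = 64, 16, 8, 3
X = rng.normal(size=(n_mols, n_feat))                 # molecule features
W_enc = rng.normal(size=(n_feat, n_hidden)) / np.sqrt(n_feat)
H = np.tanh(X @ W_enc)                                # shared embeddings (frozen)

# Synthetic per-task targets so the demo is self-contained.
true_heads = rng.normal(size=(n_hidden, n_tasks))
Y = H @ true_heads + 0.01 * rng.normal(size=(n_mols, n_tasks))

heads = np.zeros((n_hidden, n_tasks))                 # task heads to finetune

def loss(W):
    return float(np.mean((H @ W - Y) ** 2))

loss_before = loss(heads)
for _ in range(300):                                  # joint gradient descent
    grad = 2.0 / n_mols * H.T @ (H @ heads - Y)
    heads -= 0.1 * grad
loss_after = loss(heads)                              # drops well below loss_before
```

Training all heads against one shared representation is what lets related ADMET endpoints regularize each other, which is consistent with the paper's observation that multitask finetuning helps most at larger data sizes.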
Matthew Adrian
Modeling & Informatics, Merck & Co., Inc., 213 E. Grand Ave., South San Francisco, California, 94080, USA.
Yunsie Chung
Modeling & Informatics, Merck & Co., Inc., 213 E. Grand Ave., South San Francisco, California, 94080, USA.
Kevin Boyd
BioNeMo, NVIDIA, 2788 San Tomas Expressway, Santa Clara, California, 95051, USA.
Saee Paliwal
BioNeMo, NVIDIA, 2788 San Tomas Expressway, Santa Clara, California, 95051, USA.
Srimukh Prasad Veccham
NVIDIA Corporation
Drug Discovery · Machine Learning · Theoretical Chemistry · Electronic Structure Theory
Alan C. Cheng
Modeling & Informatics, Merck & Co., Inc., 213 E. Grand Ave., South San Francisco, California, 94080, USA.