Generalization Beyond Benchmarks: Evaluating Learnable Protein-Ligand Scoring Functions on Unseen Targets

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical challenge of **generalization to unseen protein targets** for learnable protein–ligand scoring functions, revealing that standard benchmarks (e.g., PDBbind) severely overestimate real-world performance due to target overlap and data leakage. To rectify this, the authors propose a **rigorous unseen-target evaluation paradigm**, constructing target-level splits that reflect the realistic scarcity of structural and affinity data. Methodologically, they integrate **large-scale self-supervised molecular pretraining** with **lightweight few-shot fine-tuning**, substantially enhancing cross-target extrapolation. Experiments show that state-of-the-art scoring functions suffer >40% average performance degradation on truly unseen targets, confirming how misleading conventional benchmarks can be. In contrast, the proposed approach achieves significant generalization gains using only 1–5 target-specific samples for fine-tuning. This establishes a more reliable, AI-driven scoring foundation for de novo drug discovery against novel therapeutic targets.
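The target-level splits described above can be illustrated with a minimal sketch (the target names, data, and split ratio below are invented for illustration; this is not the paper's evaluation code): every complex of a given protein target is assigned to exactly one side of the split, so test targets are truly unseen during training.

```python
# Minimal sketch of a target-level split for protein-ligand data.
# All identifiers and affinity values are synthetic stand-ins.
import random

def target_level_split(complexes, test_frac=0.3, seed=0):
    """Split (target_id, ligand_id, affinity) tuples by protein target.

    No target appearing in the test set ever appears in training,
    unlike random per-complex splits that leak target information.
    """
    targets = sorted({t for t, _, _ in complexes})
    rng = random.Random(seed)
    rng.shuffle(targets)
    n_test = max(1, int(len(targets) * test_frac))
    test_targets = set(targets[:n_test])
    train = [c for c in complexes if c[0] not in test_targets]
    test = [c for c in complexes if c[0] in test_targets]
    return train, test

# Three hypothetical targets with three ligands each.
data = (
    [("kinase_A", f"lig{i}", 6.0 + i) for i in range(3)]
    + [("protease_B", f"lig{i}", 5.0 + i) for i in range(3)]
    + [("gpcr_C", f"lig{i}", 7.0 + i) for i in range(3)]
)
train, test = target_level_split(data)
```

A per-complex random split of the same data would typically place ligands of the same target on both sides, which is exactly the leakage the paper's evaluation protocol is designed to rule out.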

📝 Abstract
As machine learning becomes increasingly central to molecular design, it is vital to ensure the reliability of learnable protein-ligand scoring functions on novel protein targets. While many scoring functions perform well on standard benchmarks, their ability to generalize beyond training data remains a significant challenge. In this work, we evaluate the generalization capability of state-of-the-art scoring functions on dataset splits that simulate evaluation on targets with a limited number of known structures and experimental affinity measurements. Our analysis reveals that the commonly used benchmarks do not reflect the true challenge of generalizing to novel targets. We also investigate whether large-scale self-supervised pretraining can bridge this generalization gap and we provide preliminary evidence of its potential. Furthermore, we probe the efficacy of simple methods that leverage limited test-target data to improve scoring function performance. Our findings underscore the need for more rigorous evaluation protocols and offer practical guidance for designing scoring functions with predictive power extending to novel protein targets.
Problem

Research questions and friction points this paper is trying to address.

Evaluating protein-ligand scoring functions on novel targets
Assessing generalization beyond standard training benchmarks
Investigating methods to improve performance on unseen proteins
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates scoring functions on unseen protein targets
Investigates self-supervised pretraining for generalization
Probes simple methods using limited test data
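One "simple method leveraging limited test-target data" of the kind the paper probes could look like the following sketch: an affine recalibration of a pretrained scorer's outputs, fitted on a handful of labelled examples from the new target. The scorer outputs and affinities here are synthetic stand-ins, not the paper's data or method.

```python
# Hedged sketch of few-shot recalibration: fit labels ≈ a * score + b
# on k labelled pairs from the unseen target, then apply the correction
# to all of that target's scores. Data below is synthetic.
import numpy as np

def recalibrate(scores_k, labels_k):
    """Least-squares affine fit on k few-shot (score, label) pairs."""
    X = np.stack([scores_k, np.ones_like(scores_k)], axis=1)
    (a, b), *_ = np.linalg.lstsq(X, labels_k, rcond=None)
    return lambda s: a * s + b

rng = np.random.default_rng(0)
true = rng.uniform(4, 9, size=50)                  # "experimental" affinities
raw = 0.5 * true + 2.0 + rng.normal(0, 0.1, 50)    # biased pretrained scores
few = slice(0, 5)                                  # only 5 labelled samples
corrected = recalibrate(raw[few], true[few])(raw)
```

Because the correction has only two parameters, it can be estimated from the 1–5 samples the summary mentions without overfitting, which is what makes this family of methods attractive for data-scarce novel targets.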
Jakub Kopko
CIIRC, Faculty of Electrical Engineering, Czech Technical University in Prague
David Graber
Seminar for Applied Mathematics, Department of Mathematics, ETH Zurich
Saltuk Mustafa Eyrilmez
Loschmidt Laboratories, Faculty of Science, Masaryk University, Brno
Stanislav Mazurenko
Loschmidt Laboratories, Faculty of Science, Masaryk University, Brno
David Bednar
Loschmidt Laboratories, Faculty of Science, Masaryk University, Brno
Jiri Sedlar
CIIRC, Faculty of Electrical Engineering, Czech Technical University in Prague
Josef Sivic
Czech Technical University, CIIRC, ELLIS Unit Prague
computer vision, machine learning