Benchmarking community drug response prediction models: datasets, models, tools, and metrics for cross-dataset generalization analysis

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current drug response prediction (DRP) models lack standardized evaluation of cross-dataset generalization, hindering fair and reproducible assessment. Method: We introduce the first open-source benchmark framework for DRP generalization evaluation, integrating five public drug screening datasets and six uniformly implemented models—RF, XGBoost, GCN, MPNN, DeepSynergy, and DNN—and proposing a novel evaluation metric system balancing absolute performance (R², RMSE) and relative degradation (ΔR²). Contribution/Results: Empirical analysis reveals that all models suffer >40% average R² degradation across datasets; CTRPv2 emerges as the optimal source dataset, yielding highest generalization scores on most target datasets; MPNN and DeepSynergy demonstrate relatively robust cross-dataset performance. The framework supports multi-source data fusion and transfer-based evaluation, enabling rigorous, transparent, and reproducible generalization assessment in DRP research.

📝 Abstract
Deep learning (DL) and machine learning (ML) models have shown promise in drug response prediction (DRP), yet their ability to generalize across datasets remains an open question, raising concerns about their real-world applicability. Due to the lack of standardized benchmarking approaches, model evaluations and comparisons often rely on inconsistent datasets and evaluation criteria, making it difficult to assess true predictive capabilities. In this work, we introduce a benchmarking framework for evaluating cross-dataset prediction generalization in DRP models. Our framework incorporates five publicly available drug screening datasets, six standardized DRP models, and a scalable workflow for systematic evaluation. To assess model generalization, we introduce a set of evaluation metrics that quantify both absolute performance (e.g., predictive accuracy across datasets) and relative performance (e.g., performance drop compared to within-dataset results), enabling a more comprehensive assessment of model transferability. Our results reveal substantial performance drops when models are tested on unseen datasets, underscoring the importance of rigorous generalization assessments. While several models demonstrate relatively strong cross-dataset generalization, no single model consistently outperforms across all datasets. Furthermore, we identify CTRPv2 as the most effective source dataset for training, yielding higher generalization scores across target datasets. By sharing this standardized evaluation framework with the community, our study aims to establish a rigorous foundation for model comparison, and accelerate the development of robust DRP models for real-world applications.
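The abstract's two metric families, absolute performance (R², RMSE) and relative performance drop, can be sketched in a few lines. The paper's exact formula for the relative drop (ΔR²) is not reproduced on this page, so the fractional-degradation form below is an assumption, not the framework's definition:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def relative_r2_drop(r2_within, r2_cross):
    """Fractional R^2 degradation from within-dataset to cross-dataset
    evaluation (assumed form of the paper's Delta R^2)."""
    return (r2_within - r2_cross) / r2_within
```

Under this form, the reported ">40% average R² degradation" corresponds to `relative_r2_drop` values above 0.4.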
Problem

Research questions and friction points this paper is trying to address.

Evaluate cross-dataset generalization in drug response prediction models.
Standardize benchmarking for consistent model evaluation and comparison.
Assess model transferability using comprehensive performance metrics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized benchmarking framework for DRP models
Incorporates five drug screening datasets and six models
Introduces metrics for cross-dataset generalization assessment
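The benchmarking workflow implied above (train on each source dataset, score on every target, read within-dataset performance off the diagonal) can be sketched as follows. The least-squares linear model is an illustrative stand-in for any of the six benchmarked DRP models, and `cross_dataset_matrix` is a hypothetical helper, not the framework's API:

```python
import numpy as np

def fit_linear(X, y):
    # Ordinary least squares with a bias column; a toy stand-in for
    # RF, XGBoost, GCN, MPNN, DeepSynergy, or DNN.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def cross_dataset_matrix(datasets):
    """Train on each source dataset and evaluate R^2 on every target.
    Diagonal entries give within-dataset performance; off-diagonal
    entries give cross-dataset generalization scores."""
    names = list(datasets)
    scores = {}
    for src in names:
        w = fit_linear(*datasets[src])
        for tgt in names:
            X_t, y_t = datasets[tgt]
            scores[(src, tgt)] = r2(y_t, predict(w, X_t))
    return scores
```

Ranking source datasets by their mean off-diagonal score is one way to arrive at a conclusion like "CTRPv2 is the most effective source dataset."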
A. Partin
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
Priyanka Vasanthakumari
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
Oleksandr Narykov
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
Andreas Wilke
Argonne National Laboratory
Natasha Koussa
Frederick National Laboratory for Cancer Research, Cancer Data Science Initiatives, Cancer Research Technology Program, Frederick, MD, USA
Sara E. Jones
Frederick National Laboratory for Cancer Research, Cancer Data Science Initiatives, Cancer Research Technology Program, Frederick, MD, USA
Yitan Zhu
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
Jamie C. Overbeek
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
Rajeev Jain
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
G. Fernando
Department of Statistics, University of Nebraska–Lincoln, Lincoln, NE, USA
Cesar Sanchez-Villalobos
Department of Electrical & Computer Engineering, Texas Tech University, Lubbock, TX, USA
Cristina Garcia-Cardona
Scientist, Los Alamos National Laboratory
J. Mohd-Yusof
Division of Computer, Computational and Statistical Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA
Nicholas Chia
Data Science and Learning
Justin M. Wozniak
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
Souparno Ghosh
Department of Statistics, University of Nebraska–Lincoln, Lincoln, NE, USA
R. Pal
Department of Electrical & Computer Engineering, Texas Tech University, Lubbock, TX, USA
Thomas S. Brettin
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA
M. R. Weil
Frederick National Laboratory for Cancer Research, Cancer Data Science Initiatives, Cancer Research Technology Program, Frederick, MD, USA
Rick L. Stevens
Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL, USA; Department of Computer Science, The University of Chicago, Chicago, IL, USA