A General Approach for Determining Applicability Domain of Machine Learning Models

📅 2024-05-28
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study addresses the lack of generality and interpretability in applicability domain (AD) estimation for machine learning models. We propose a unified AD assessment framework based on kernel density estimation (KDE), which quantifies the distance of a query sample from the training data distribution in feature space and establishes a quantitative relationship among distance, prediction error, and uncertainty. Chemical prior knowledge is incorporated to calibrate the AD decision threshold. To our knowledge, this is the first method to enable consistent AD evaluation across model types—including random forests (RF), gradient-boosted decision trees (GBDT), and graph neural networks (GNN)—and across heterogeneous materials datasets (crystals, molecules, alloys). Experiments demonstrate that large KDE-derived distances strongly correlate with high prediction residuals and elevated uncertainty estimates. An open-source toolkit enables automated in-domain/out-of-domain classification. The implementation and documentation are publicly available.

📝 Abstract
Knowledge of the domain of applicability of a machine learning model is essential to ensuring accurate and reliable model predictions. In this work, we develop a new and general approach of assessing model domain and demonstrate that our approach provides accurate and meaningful domain designation across multiple model types and material property data sets. Our approach assesses the distance between data in feature space using kernel density estimation, where this distance provides an effective tool for domain determination. We show that chemical groups considered unrelated based on chemical knowledge exhibit significant dissimilarities by our measure. We also show that high measures of dissimilarity are associated with poor model performance (i.e., high residual magnitudes) and poor estimates of model uncertainty (i.e., unreliable uncertainty estimation). Automated tools are provided to enable researchers to establish acceptable dissimilarity thresholds to identify whether new predictions of their own machine learning models are in-domain versus out-of-domain.
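The abstract describes scoring query samples by their KDE-estimated distance from the training distribution in feature space, then applying a dissimilarity threshold to flag out-of-domain predictions. A minimal sketch of that idea, using scikit-learn's `KernelDensity` and an illustrative percentile-based threshold (the bandwidth, kernel, and threshold choice here are assumptions, not the authors' exact procedure):

```python
# Hypothetical sketch of KDE-based applicability-domain scoring.
# Not the paper's released toolkit; an illustration of the idea only.
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_ad_scorer(X_train, bandwidth=0.5):
    """Fit a KDE on training features and return a dissimilarity scorer."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(X_train)

    def dissimilarity(X_query):
        # Lower log-density means farther from the training distribution;
        # negate so larger values indicate greater dissimilarity.
        return -kde.score_samples(X_query)

    return dissimilarity

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))          # stand-in for training features
score = fit_ad_scorer(X_train)

# Calibrate an in-domain threshold, e.g. the 95th percentile of the
# training set's self-dissimilarity (an illustrative choice).
threshold = np.percentile(score(X_train), 95)

X_out = rng.normal(loc=10.0, size=(5, 3))    # far from the training data
print(np.all(score(X_out) > threshold))      # → True: flagged out-of-domain
```

Samples far from the training distribution receive much larger dissimilarity scores than any training point, which is the behavior the paper links to high residuals and unreliable uncertainty estimates.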
Problem

Research questions and friction points this paper is trying to address.

Determining applicability domain of machine learning models
Assessing data distance in feature space for domain determination
Identifying in-domain versus out-of-domain predictions reliably
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kernel density estimation of feature-space distances
Automated tools for setting dissimilarity thresholds
General approach spanning multiple model types
Lane E. Schultz
University of Wisconsin-Madison, 1500 Engineering Drive, Madison, WI 53706, USA
Yiqi Wang
Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
R. Jacobs
University of Wisconsin-Madison, 1500 Engineering Drive, Madison, WI 53706, USA
Dane Morgan
University of Wisconsin, Materials Science
materials