🤖 AI Summary
Predicting size-extensive molecular properties (e.g., energy, polarizability) across scales, particularly for molecules larger than those in the training set, remains a critical challenge in molecular machine learning.
Method: We propose an unsupervised training set selection method that integrates physics-informed atomic feature embeddings with integer linear programming (ILP), the first to formulate molecular subset selection as an ILP problem. Unlike conventional diversity- or coverage-driven strategies, our approach imposes atom-level local environment similarity constraints, ensuring systematic coverage of local chemical motifs while guaranteeing a globally optimal subset.
Results: The selected training sets significantly improve model generalization to unseen, especially out-of-distribution, large molecules. Experiments across multiple extensive property prediction tasks demonstrate substantial gains over state-of-the-art unsupervised selection baselines, with high computational efficiency and inherent theoretical interpretability.
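To make the ILP formulation concrete, here is a minimal sketch of a covering-style selection problem: choose the fewest molecules such that every local atomic environment type appears in at least one selected molecule. The coverage matrix and environment types are hypothetical toy data, and the paper's actual constraints (atom-level similarity thresholds on learned embeddings) are more elaborate; this only illustrates the binary-variable ILP structure, solved here with SciPy's `milp`.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy data: 4 local environment types x 5 candidate molecules.
# cover[i, j] = 1 if molecule j contains an atom in environment i.
cover = np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 1, 0, 0, 0],
])
n_mols = cover.shape[1]

# Binary decision variables x_j: 1 if molecule j enters the training set.
# Objective: minimize the number of selected molecules (all costs = 1).
c = np.ones(n_mols)

# Coverage constraints: each environment must occur in >= 1 selected molecule.
constraints = LinearConstraint(cover, lb=1, ub=np.inf)

res = milp(
    c,
    constraints=constraints,
    integrality=np.ones(n_mols),  # all variables integer
    bounds=Bounds(0, 1),          # restrict to {0, 1}
)
selected = np.flatnonzero(np.round(res.x))
print(selected)
```

The same skeleton extends to the paper's setting by replacing the 0/1 coverage entries with similarity-based indicators between per-atom embeddings; the ILP solver then returns a provably optimal subset rather than a greedy approximation.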
📝 Abstract
Integer linear programming (ILP) is an elegant approach to solving linear optimization problems that are naturally described using integer decision variables. Within the context of physics-inspired machine learning applied to chemistry, we demonstrate the relevance of an ILP formulation for selecting molecular training sets for predictions of size-extensive properties. We show that our algorithm outperforms existing unsupervised training set selection approaches, especially when predicting properties of molecules larger than those present in the training set. We argue that the improved performance stems from a selection criterion based on local (i.e., per-atom) similarity and a unique ILP approach that finds optimal solutions efficiently. Altogether, this work provides a practical algorithm to improve the performance of physics-inspired machine learning models and offers insights into the conceptual differences from existing training set selection approaches.