AI Summary
This paper addresses data valuation: the quantification of individual data points' influence on model performance in machine learning. We propose an efficient, interpretable valuation framework based on Gaussian processes (GPs). Methodologically, we introduce GPs for the first time to model sub-model utility, integrating Bayesian inference with a canonical decomposition of utility functions to enable rapid, incremental estimation of sample contributions. Our key contributions are: (1) the first interpretable decomposition framework for data valuation, explicitly attributing influence to individual samples and their interactions; and (2) leveraging GPs to jointly ensure theoretical rigor (via probabilistic modeling of utility uncertainty) and computational efficiency (through closed-form posterior updates). Experiments demonstrate that our approach accelerates valuation by over an order of magnitude compared to full retraining, while achieving high rank correlation with Leave-One-Out and other baselines in influence ranking. The framework thus enables near real-time, principled assessment of data value.
Abstract
In machine learning, quantifying the impact of a given datum on model training is a fundamental task referred to as data valuation. Building on previous work from the literature, we design a novel canonical decomposition that lets practitioners analyze any data valuation method as the combination of two parts: a utility function that captures characteristics of a given model, and an aggregation procedure that merges this information. We also propose Gaussian processes as a means of cheaply evaluating the utility function on "sub-models", i.e. models trained on a subset of the training set. The strength of our approach stems both from its theoretical grounding in Bayesian theory and from its practical reach: it enables fast estimation of valuations thanks to efficient update formulae.
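To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) of using GP regression as a surrogate for sub-model utility: each subset of the training set is encoded as a binary indicator vector, observed utilities of a few trained sub-models serve as GP targets, and the closed-form posterior then estimates the utility of unseen subsets without retraining. The kernel choice, subset encoding, and toy utility function below are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X_obs, y_obs, X_new, noise=1e-4, lengthscale=1.0):
    """Closed-form GP posterior mean and variance at the query points X_new."""
    K = rbf_kernel(X_obs, X_obs, lengthscale) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_new, X_obs, lengthscale)
    K_ss = rbf_kernel(X_new, X_new, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

# Toy setting: n = 4 training points. A subset S is a 0/1 indicator vector;
# its "utility" is (for illustration only) a noisy additive function of which
# points are included. In practice u(S) would be a trained sub-model's score.
rng = np.random.default_rng(0)
n = 4
true_weights = np.array([0.4, 0.1, 0.3, 0.2])
subsets = rng.integers(0, 2, size=(10, n)).astype(float)   # observed subsets
utilities = subsets @ true_weights + 0.01 * rng.standard_normal(10)

query = np.array([[1.0, 1.0, 0.0, 0.0]])  # estimate utility of subset {0, 1}
mean, var = gp_posterior(subsets, utilities, query)
print(mean, var)
```

The posterior variance is what gives the approach its Bayesian flavor: it flags subsets whose utility is poorly constrained by the observed sub-models, which could guide where to spend further retraining budget.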