🤖 AI Summary
Conventional data attribution methods overlook the inherent stochasticity of deep learning training, such as random parameter initialization and stochastic minibatch sampling. They focus solely on the mean of model outputs and fail to characterize how those outputs vary in distribution across different training runs.
Method: We propose distributional training data attribution (d-TDA), a framework that extends data attribution from point estimates to full predictive distributions, quantifying how individual training examples influence distributional properties of model outputs (e.g., variance, tail probabilities). Technically, we formulate a distribution-aware attribution model based on stochastic computational graphs and employ unrolled differentiation coupled with asymptotic analysis to show that influence functions (IFs) emerge naturally as the limit of unrolled differentiation within this distributional framework, thereby providing new theoretical justification for the applicability of IFs to non-convex deep networks.
Results: Experiments demonstrate that d-TDA accurately identifies samples that significantly perturb distributional characteristics, while explicitly delineating the validity boundaries and failure regimes of IF-based attribution.
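To make the core idea concrete, here is a minimal toy sketch (not the paper's algorithm; the data, learning rate, and scalar model are illustrative assumptions). A one-parameter model is trained many times with random initialisation and stochastic example sampling; d-TDA-style attribution then compares distributional statistics of the output across runs, with and without one training example, rather than only the mean:

```python
import numpy as np

def train(x, y, seed, steps=50, lr=0.02):
    """Train a scalar linear model y ~ w*x with SGD.

    Randomness enters through the initialisation and the
    stochastic (single-example) sampling order.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal()                      # random init
    n = len(x)
    for _ in range(steps):
        i = rng.integers(n)               # stochastic sampling
        w -= lr * 2 * (w * x[i] - y[i]) * x[i]
    return w

def output_distribution(x, y, x_test, n_runs=200):
    """Distribution of the model output at x_test over training runs."""
    return np.array([train(x, y, s) * x_test for s in range(n_runs)])

# Toy data on the line y = 2x; the last example is large in scale,
# so it strongly accelerates convergence and shrinks run-to-run spread.
x = np.array([1.0, 1.2, 0.9, 5.0])
y = np.array([2.0, 2.4, 1.8, 10.0])

full = output_distribution(x, y, x_test=1.0)
loo  = output_distribution(x[:-1], y[:-1], x_test=1.0)  # drop last example

# Distributional attribution: the example's effect shows up in the
# spread of outputs across runs, not only in their mean.
print(f"mean shift: {full.mean() - loo.mean():+.3f}")
print(f"std  shift: {full.std() - loo.std():+.3f}")
```

In this toy setting, removing the large-scale example leaves the converged solution essentially unchanged but widens the output distribution across runs, which is exactly the kind of effect a mean-only attribution method cannot see.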
📝 Abstract
Randomness is an unavoidable part of training deep learning models, yet it is something that traditional training data attribution algorithms fail to rigorously account for. They ignore the fact that, due to stochasticity in the initialisation and batching, training on the same dataset can yield different models. In this paper, we address this shortcoming by introducing distributional training data attribution (d-TDA), the goal of which is to predict how the distribution of model outputs (over training runs) depends upon the dataset. We demonstrate the practical significance of d-TDA in experiments, e.g. by identifying training examples that drastically change the distribution of some target measurement without necessarily changing its mean. Intriguingly, we also find that influence functions (IFs), a popular but poorly understood data attribution tool, emerge naturally from our distributional framework as the limit of unrolled differentiation, without requiring restrictive convexity assumptions. This provides a new mathematical motivation for their efficacy in deep learning and helps to characterise their limitations.
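For readers unfamiliar with the influence functions the abstract refers to: the classical IF estimate (in the style of Koh and Liang) approximates the effect of removing one training example via the inverse Hessian of the training objective. A minimal sketch on a convex ridge-regression problem, where leave-one-out retraining is cheap enough to check the approximation directly (the toy data and test point are illustrative assumptions, and this convex setting is precisely the one the paper's analysis moves beyond):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
lam = 1e-2

def fit(X, y):
    """Closed-form minimiser of (1/n)||Xw - y||^2 + lam*||w||^2."""
    n_, d_ = X.shape
    return np.linalg.solve(2 * X.T @ X / n_ + 2 * lam * np.eye(d_),
                           2 * X.T @ y / n_)

w = fit(X, y)
H = 2 * X.T @ X / n + 2 * lam * np.eye(d)   # Hessian of the objective
x_test = np.ones(d)

if_pred, loo_actual = [], []
for i in range(n):
    g_i = 2 * (X[i] @ w - y[i]) * X[i]      # gradient of example i's loss
    # IF estimate of removing example i: (1/n) * x_test^T H^{-1} g_i
    if_pred.append(x_test @ np.linalg.solve(H, g_i) / n)
    # Ground truth: actually retrain without example i.
    w_loo = fit(np.delete(X, i, 0), np.delete(y, i))
    loo_actual.append(x_test @ (w_loo - w))

r = np.corrcoef(if_pred, loo_actual)[0, 1]
print(f"IF vs. leave-one-out correlation: {r:.3f}")
```

In this convex setting the correlation is close to perfect, which is what the classical convexity-based justification for IFs predicts; the paper's contribution is a motivation for IFs that does not rely on this assumption.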