Distributional Training Data Attribution

📅 2025-06-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Conventional data attribution methods overlook the inherent stochasticity of deep learning training, such as random parameter initialisation and stochastic minibatch sampling, and focus solely on the mean of model outputs, failing to characterise how outputs vary across different training runs. Method: We propose distributional training data attribution (d-TDA), a framework that extends data attribution from point estimates to full predictive distributions, quantifying how individual training examples influence distributional properties (e.g., variance, tail probabilities). Technically, we formulate a distribution-aware attribution model over stochastic training dynamics and employ unrolled differentiation coupled with asymptotic analysis to show that influence functions (IFs) naturally arise as a limiting case of d-TDA, providing new theoretical justification for IFs' applicability in non-convex deep networks without restrictive convexity assumptions. Results: Experiments demonstrate that d-TDA accurately identifies examples that significantly perturb distributional characteristics, while explicitly delineating the validity boundaries and failure regimes of IF-based attribution.

📝 Abstract
Randomness is an unavoidable part of training deep learning models, yet something that traditional training data attribution algorithms fail to rigorously account for. They ignore the fact that, due to stochasticity in the initialisation and batching, training on the same dataset can yield different models. In this paper, we address this shortcoming by introducing distributional training data attribution (d-TDA), the goal of which is to predict how the distribution of model outputs (over training runs) depends upon the dataset. We demonstrate the practical significance of d-TDA in experiments, e.g. by identifying training examples that drastically change the distribution of some target measurement without necessarily changing the mean. Intriguingly, we also find that influence functions (IFs), a popular but poorly-understood data attribution tool, emerge naturally from our distributional framework as the limit of unrolled differentiation, without requiring restrictive convexity assumptions. This provides a new mathematical motivation for their efficacy in deep learning, and helps to characterise their limitations.
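To make the core idea concrete, here is a rough, hedged illustration (not the paper's actual estimator): a naive Monte-Carlo version of distributional attribution, where a tiny model is retrained across many seeds with and without one training example, and the resulting distributions of a target measurement are compared. The model, training setup, and attribution statistics below are all toy assumptions chosen for brevity.

```python
import numpy as np

def train(xs, ys, seed, steps=200, lr=0.1, batch=4):
    # SGD on 1-D linear regression; stochasticity enters through
    # the random initialisation and the minibatch sampling.
    rng = np.random.default_rng(seed)
    w = rng.normal()  # random initialisation
    n = len(xs)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)  # stochastic minibatch
        grad = np.mean((w * xs[idx] - ys[idx]) * xs[idx])
        w -= lr * grad
    return w

def output_distribution(xs, ys, x_query, seeds):
    # Distribution of the target measurement f(x_query) over training runs.
    return np.array([train(xs, ys, s) * x_query for s in seeds])

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 20)
ys = 2.0 * xs + 0.1 * rng.normal(size=20)

seeds = range(50)
full = output_distribution(xs, ys, 1.0, seeds)
loo = output_distribution(np.delete(xs, 0), np.delete(ys, 0), 1.0, seeds)

# A crude "distributional attribution" for example 0: the shift in the
# mean AND in the spread of the output distribution when it is removed.
print(f"mean shift: {loo.mean() - full.mean():+.4f}")
print(f"std  shift: {loo.std() - full.std():+.4f}")
```

A point-estimate method would report only the mean shift; the point of d-TDA is that an example can matter mainly through the second line, changing the variance (or tails) of the output distribution across runs. The paper's framework replaces this brute-force retraining with unrolled differentiation.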
Problem

Research questions and friction points this paper is trying to address.

Account for randomness in deep learning training data attribution
Predict distribution of model outputs based on training dataset
Understand influence functions within a distributional framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces distributional training data attribution (d-TDA)
Predicts model output distribution over training runs
Links influence functions to distributional framework naturally
Bruno Mlodozeniec
University of Cambridge, Max Planck Institute for Intelligent Systems
Isaac Reid
PhD student, University of Cambridge
Machine learning, Inference, Statistical physics
Sam Power
University of Cambridge
David Krueger
Mila - Quebec AI Institute
Murat Erdogdu
University of Toronto, Vector Institute
Richard E. Turner
University of Cambridge, Alan Turing Institute
Roger Grosse
Associate Professor, University of Toronto
Machine learning