🤖 AI Summary
In the final-model-only setting, where only the trained model is accessible and not the training algorithm or intermediate checkpoints, there has been no reliable, computationally grounded gold standard for quantifying a model's sensitivity to individual training samples (training data attribution, TDA). This work proposes such a gold standard for the final-model-only setting, built on further training with appropriate adjustment and averaging, enabling principled attribution without access to the original training run. The authors show that mainstream gradient-based methods, including influence functions and TracIn, approximate this standard in different ways. Systematic evaluation on tabular, image, and text datasets and models reveals two empirical regularities: first-order gradient methods can start with high approximation quality but degrade as further training proceeds, while influence-function methods are more stable yet surprisingly lower in quality. These findings offer evidence-based guidance for method selection in TDA, connecting theoretical foundations with practical deployment.
📝 Abstract
Training data attribution (TDA) is the task of attributing model behavior to elements in the training data. This paper draws attention to the common setting where one has access only to the final trained model, and not the training algorithm or intermediate information from training. To serve as a gold standard for TDA in this "final-model-only" setting, we propose further training, with appropriate adjustment and averaging, to measure the sensitivity of the given model to training instances. We then unify existing gradient-based methods for TDA by showing that they all approximate the further training gold standard in different ways. We investigate empirically the quality of these gradient-based approximations to further training, for tabular, image, and text datasets and models. We find that the approximation quality of first-order methods is sometimes high but decays with the amount of further training. In contrast, the approximations given by influence function methods are more stable but surprisingly lower in quality.
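The first-order approximation the abstract refers to can be illustrated with a minimal sketch. A TracIn-style score, evaluated here only at the final model, estimates how one SGD step on a single training example would change the loss on a test example: the step size times the inner product of the two examples' loss gradients. The linear model, squared loss, learning rate, and all names below are illustrative assumptions for this sketch, not the paper's implementation or its gold-standard procedure.

```python
import numpy as np

def grad_sq_loss(w, x, y):
    """Gradient of the squared loss (w.x - y)^2 with respect to w
    for a single example (x, y) of a linear model."""
    return 2.0 * (w @ x - y) * x

def first_order_scores(w, X_train, y_train, x_test, y_test, lr=0.1):
    """TracIn-style first-order TDA at the final model only:
    score_i = lr * <grad_i, grad_test>, the predicted decrease in
    test loss from one SGD step on training example i alone."""
    g_test = grad_sq_loss(w, x_test, y_test)
    return np.array([lr * (grad_sq_loss(w, x, y) @ g_test)
                     for x, y in zip(X_train, y_train)])

# Tiny illustrative data (assumed, not from the paper).
w = np.array([1.0, -0.5])                       # final model weights
X_train = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])
y_train = np.array([2.0, 0.0, 1.0])
scores = first_order_scores(w, X_train, y_train,
                            x_test=np.array([1.0, 0.0]), y_test=2.0)
# The training example identical to the test point gets the largest score.
```

As the paper's experiments suggest, such a single-checkpoint first-order score can track the effect of a small amount of further training well, but its quality decays as more further training accumulates and higher-order effects dominate.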