🤖 AI Summary
Data attribution methods are highly sensitive to their hyperparameters, and conventional tuning is impractical: evaluating attribution quality typically requires retraining models on subsets of the training data, which is computationally prohibitive and severely hinders practical deployment. To address this, the paper presents the first large-scale empirical study systematically characterizing hyperparameter sensitivity across mainstream approaches (e.g., influence functions) and develops a theoretical analysis of how the regularization term affects attribution behavior. Building on these insights, it proposes a lightweight, retraining-free strategy for selecting the regularization hyperparameter. Experiments on benchmarks including CIFAR-10 and MNIST show that the method matches grid search in attribution accuracy while reducing tuning overhead by over 90%, substantially enhancing the practicality and deployability of data attribution in real-world applications.
📝 Abstract
Data attribution methods, which quantify the influence of individual training data points on a machine learning model, have gained increasing popularity in data-centric applications in modern AI. Despite a recent surge of new methods developed in this space, the impact of hyperparameter tuning in these methods remains under-explored. In this work, we present the first large-scale empirical study to understand the hyperparameter sensitivity of common data attribution methods. Our results show that most methods are indeed sensitive to certain key hyperparameters. However, unlike typical machine learning algorithms -- whose hyperparameters can be tuned using computationally cheap validation metrics -- evaluating data attribution performance often requires retraining models on subsets of training data, making such metrics prohibitively costly for hyperparameter tuning. This poses a critical open challenge for the practical application of data attribution methods. To address this challenge, we advocate for a better theoretical understanding of hyperparameter behavior to inform efficient tuning strategies. As a case study, we provide a theoretical analysis of the regularization term that is critical in many variants of influence function methods. Building on this analysis, we propose a lightweight procedure for selecting the regularization value without model retraining, and validate its effectiveness across a range of standard data attribution benchmarks. Overall, our study identifies a fundamental yet overlooked challenge in the practical application of data attribution, and highlights the importance of careful discussion of hyperparameter selection in future method development.
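To make concrete where the regularization hyperparameter enters, the sketch below computes a standard damped influence-function score, \(-g_{\text{test}}^\top (H + \lambda I)^{-1} g_{\text{train}}\), on synthetic data and shows how the score shifts as the damping value \(\lambda\) changes. This is a minimal, generic illustration of the damped-Hessian formulation only; the function name `influence_score`, the toy Hessian, and the \(\lambda\) grid are assumptions for demonstration and do not reproduce the paper's actual selection procedure.

```python
import numpy as np

def influence_score(grad_test, grad_train, hessian, lam):
    """Damped influence of one training point on a test loss.

    Computes -g_test^T (H + lam * I)^{-1} g_train, where lam is the
    regularization (damping) hyperparameter discussed above.
    """
    d = hessian.shape[0]
    damped = hessian + lam * np.eye(d)
    return -grad_test @ np.linalg.solve(damped, grad_train)

# Toy illustration of hyperparameter sensitivity: the same gradients and
# Hessian can yield noticeably different attribution scores as lam varies.
rng = np.random.default_rng(0)
d = 20
A = rng.normal(size=(d, d))
hessian = A @ A.T / d            # symmetric PSD stand-in for the loss Hessian
grad_test = rng.normal(size=d)
grad_train = rng.normal(size=d)

for lam in [1e-4, 1e-2, 1e0]:
    score = influence_score(grad_test, grad_train, hessian, lam)
    print(f"lam={lam:g}  influence={score:+.4f}")
```

In practice the Hessian of a deep model is never formed explicitly; implementations approximate the damped inverse-Hessian-vector product (e.g., via conjugate gradients or low-rank approximations), but the role of \(\lambda\) as a stability-controlling hyperparameter is the same as in this toy example.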