Taming Hyperparameter Sensitivity in Data Attribution: Practical Selection Without Costly Retraining

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Data attribution methods are highly sensitive to their hyperparameters, yet conventional tuning requires repeated model retraining, incurring computational costs that severely hinder practical deployment. To address this, we conduct the first large-scale empirical study systematically characterizing hyperparameter sensitivity across mainstream approaches (e.g., influence functions), and establish a theoretical framework for how the regularization term affects attribution stability. Building on these insights, we propose the first retraining-free, lightweight strategy for selecting the regularization hyperparameter, using validation-set-based proxy evaluation for efficient tuning. Experiments on benchmarks including CIFAR-10 and MNIST show that the method matches grid search in attribution accuracy while cutting tuning overhead by over 90%, substantially improving the practicality of data attribution in real-world applications.
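
For reference, the regularization term in question appears in standard damped influence-function estimators as a λI term added to the Hessian before inversion. This is the general textbook form, not necessarily the exact variant analyzed in the paper:

```latex
\mathcal{I}_{\lambda}(z_i, z_{\mathrm{test}})
  = -\,\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
    \bigl(H_{\hat{\theta}} + \lambda I\bigr)^{-1}
    \nabla_{\theta} L(z_i, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{j=1}^{n} \nabla_{\theta}^{2} L(z_j, \hat{\theta})
```

Here z_i is a training point, z_test a test point, and θ̂ the fitted parameters; the damping value λ is the hyperparameter whose retraining-free selection the paper targets.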

📝 Abstract
Data attribution methods, which quantify the influence of individual training data points on a machine learning model, have gained increasing popularity in data-centric applications in modern AI. Despite a recent surge of new methods developed in this space, the impact of hyperparameter tuning in these methods remains under-explored. In this work, we present the first large-scale empirical study to understand the hyperparameter sensitivity of common data attribution methods. Our results show that most methods are indeed sensitive to certain key hyperparameters. However, unlike typical machine learning algorithms -- whose hyperparameters can be tuned using computationally-cheap validation metrics -- evaluating data attribution performance often requires retraining models on subsets of training data, making such metrics prohibitively costly for hyperparameter tuning. This poses a critical open challenge for the practical application of data attribution methods. To address this challenge, we advocate for better theoretical understandings of hyperparameter behavior to inform efficient tuning strategies. As a case study, we provide a theoretical analysis of the regularization term that is critical in many variants of influence function methods. Building on this analysis, we propose a lightweight procedure for selecting the regularization value without model retraining, and validate its effectiveness across a range of standard data attribution benchmarks. Overall, our study identifies a fundamental yet overlooked challenge in the practical application of data attribution, and highlights the importance of careful discussion on hyperparameter selection in future method development.
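
To make the regularization hyperparameter concrete, here is a minimal, self-contained sketch of damped influence-function attribution on a toy logistic-regression model. The model, data, and λ value are illustrative assumptions, not the paper's implementation or benchmarks:

```python
# Minimal sketch of damped influence-function attribution (illustrative only).
import torch

torch.manual_seed(0)
n, d = 64, 5
X = torch.randn(n, d)
y = (X @ torch.randn(d) > 0).float()

def loss_fn(params, xb, yb):
    # Mean binary cross-entropy of a linear (logistic-regression) model.
    return torch.nn.functional.binary_cross_entropy_with_logits(xb @ params, yb)

# Fit the toy model to (approximate) convergence with L-BFGS.
w = torch.zeros(d, requires_grad=True)
opt = torch.optim.LBFGS([w], max_iter=100)

def closure():
    opt.zero_grad()
    loss = loss_fn(w, X, y)
    loss.backward()
    return loss

opt.step(closure)

def train_hessian(idx):
    # Hessian of the training loss on the subset `idx`, at the fitted params.
    # Near-separable data makes this ill-conditioned, which is exactly why
    # the damping term lambda matters.
    return torch.autograd.functional.hessian(
        lambda p: loss_fn(p, X[idx], y[idx]), w.detach())

def influence_scores(H, lmbda, x_test, y_test):
    # Damped influence of every training point on one test loss:
    #   I_i = -grad L(z_test)^T (H + lambda*I)^{-1} grad L(z_i)
    g_test = torch.autograd.grad(loss_fn(w, x_test[None], y_test[None]), w)[0]
    v = torch.linalg.solve(H + lmbda * torch.eye(d), g_test)
    return torch.stack([
        -torch.autograd.grad(loss_fn(w, X[i:i + 1], y[i:i + 1]), w)[0] @ v
        for i in range(n)
    ])

H_full = train_hessian(torch.arange(n))
print(influence_scores(H_full, 1e-2, X[0], y[0])[:5])
```

Varying λ over a few orders of magnitude and comparing the resulting score rankings is enough to reproduce the sensitivity phenomenon the study documents.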
Problem

Research questions and friction points this paper is trying to address.

Hyperparameter sensitivity in data attribution methods
Costly retraining for evaluating attribution performance
Need for efficient hyperparameter tuning strategies that avoid retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale study on hyperparameter sensitivity in data attribution
Theoretical analysis of regularization in influence function methods
Lightweight procedure for selecting the regularization value without model retraining (see the sketch after this list)
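
As a concrete illustration of what a retraining-free selection procedure could look like, the snippet below continues the sketch above: it sweeps candidate λ values and keeps the smallest one whose attribution rankings agree well across Hessians estimated on two halves of the training data. This stability proxy and its threshold are hypothetical stand-ins; the paper derives its own selection rule from theoretical analysis:

```python
# Hypothetical retraining-free heuristic for choosing lambda (NOT the paper's
# rule): prefer the smallest lambda whose scores are stable across data splits.
from scipy.stats import spearmanr  # rank agreement between score vectors

# Hessians estimated on two disjoint halves of the training data.
perm = torch.randperm(n)
H_a = train_hessian(perm[: n // 2])
H_b = train_hessian(perm[n // 2:])

def stability(lmbda):
    # Rank agreement of the attribution scores produced by the two halves.
    s_a = influence_scores(H_a, lmbda, X[0], y[0]).detach().numpy()
    s_b = influence_scores(H_b, lmbda, X[0], y[0]).detach().numpy()
    return spearmanr(s_a, s_b)[0]

candidates = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]  # assumed candidate grid
threshold = 0.95  # assumed agreement target

# Very large lambda trivially maximizes agreement (scores collapse toward a
# plain gradient-similarity ranking), so take the smallest "stable enough" one.
best = next((l for l in candidates if stability(l) >= threshold), candidates[-1])
print(f"selected lambda = {best}")
```

Note that no model is retrained at any point: only the already-fitted parameters, gradients, and subset Hessians are reused, which is the property that makes this style of selection cheap.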
👥 Authors

Weiyi Wang (University of Michigan Ann Arbor)
Junwei Deng (University of Illinois Urbana-Champaign · Data-centric AI)
Yuzheng Hu (University of Illinois Urbana-Champaign)
Shiyuan Zhang (University of Illinois Urbana-Champaign)
Xirui Jiang (University of Michigan Ann Arbor)
Runting Zhang (University of Michigan Ann Arbor)
Han Zhao (University of Illinois Urbana-Champaign)
Jiaqi W. Ma (Assistant Professor, University of Illinois Urbana-Champaign · Data-Centric AI, Data Attribution, Training Data Curation)