🤖 AI Summary
To address the high computational cost and limited accuracy of data influence estimation in large-scale models, this paper proposes a zeroth-order approximation method that relies solely on training/test loss sequences and intermediate parameter checkpoints, eliminating the need for gradient or inverse-Hessian computations. This sidesteps both the scalability limitations of conventional influence-function methods and their inapplicability to non-differentiable losses. By modeling the evolution of loss values over training, the method efficiently estimates both self-influence (a sample's influence on its own prediction) and cross-sample influence (a sample's influence on predictions for other samples). Empirically, the method substantially improves data quality assessment and anomaly detection: self-influence estimation error drops by up to 32%, and cross-sample influence estimates correlate more strongly with ground truth than those of baseline methods. Moreover, it incurs only 5–10% of the computation time and memory overhead of state-of-the-art approaches.
📝 Abstract
A critical aspect of analyzing and improving modern machine learning systems lies in understanding how individual training examples influence a model's predictive behavior. Estimating this influence enables important applications, including data selection and model debugging; in particular, self-influence, which quantifies the influence of a training point on itself, has found many uses in data quality assessment and outlier detection. Existing methods for measuring data influence, however, are often impractical for large models due to low accuracy or prohibitive computational costs: most approaches either provide poor approximations or rely on gradients and inverse-Hessian computations that remain challenging to scale. In this work, we introduce a highly efficient zeroth-order approximation for estimating the influence of training data that requires only a fraction of the time and memory footprint of prior methods. Notably, our method relies solely on loss values of intermediate checkpoints on the training and test data, along with the checkpoints themselves, making it broadly applicable even when the loss function of interest is non-differentiable. Beyond its computational efficiency, our approach achieves superior accuracy in estimating self-influence and comparable or improved accuracy in estimating train-test influence for fine-tuned large language models, enabling scalable and practical analysis of how training data shapes model behavior.
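The abstract does not spell out the paper's exact estimator, but the core idea (scoring influence from per-checkpoint loss sequences alone, with no gradients) can be illustrated with a minimal sketch. The function below is hypothetical, not the authors' method: it correlates the checkpoint-to-checkpoint loss changes of a training example with those of a test example as a zeroth-order proxy for train-test influence; calling it with the same sequence twice gives a trivial self-influence score.

```python
import numpy as np

def trajectory_influence(train_losses, test_losses):
    """Hypothetical zeroth-order influence proxy.

    train_losses, test_losses: loss of one train/test example evaluated at
    each intermediate checkpoint, in training order. No gradients needed,
    so the loss need not be differentiable.
    """
    # Per-checkpoint loss changes along the training trajectory.
    d_train = np.diff(np.asarray(train_losses, dtype=float))
    d_test = np.diff(np.asarray(test_losses, dtype=float))
    # Correlated loss-change trajectories suggest the training example's
    # updates also helped (or hurt) the test example.
    return float(np.corrcoef(d_train, d_test)[0, 1])
```

This is only a sketch of the general trajectory-based idea; the paper's actual estimator may weight checkpoints differently or use the checkpoint parameters themselves in addition to the loss values.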