AI Summary
Existing influence estimation methods suffer from training stochasticity, yielding inconsistent sample influence scores across different training runs and undermining reliable data curation and model maintenance. To address this, we propose the *f-influence* framework, the first to explicitly model training randomness and enable stable, efficient per-sample influence estimation via hypothesis testing on a single training run. Our approach integrates influence functions, stochastic gradient sampling, and statistical inference, culminating in the *f-INE* algorithm, which comes with theoretical guarantees on estimation robustness. Experiments on Llama-3.1-8B demonstrate that f-INE accurately identifies bias-inducing contaminated samples, enabling effective data cleaning and behavioral attribution. This work establishes a novel paradigm for trustworthy data governance grounded in statistically principled influence assessment.
Abstract
Influence estimation methods promise to explain and debug machine learning by estimating the impact of individual samples on the final model. Yet, existing methods collapse under training randomness: the same example may appear critical in one run and irrelevant in the next. Such instability undermines their use in data curation and cleanup, since it is unclear whether the correct datapoints were actually kept or deleted. To overcome this, we introduce *f-influence* -- a new influence estimation framework grounded in hypothesis testing that explicitly accounts for training randomness -- and establish desirable properties that make it suitable for reliable influence estimation. We also design a highly efficient algorithm, **f**-**IN**fluence **E**stimation (**f-INE**), that computes f-influence **in a single training run**. Finally, we scale up f-INE to estimate the influence of instruction-tuning data on Llama-3.1-8B and show it can reliably detect poisoned samples that steer model opinions, demonstrating its utility for data cleanup and attributing model behavior.
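To make the hypothesis-testing idea concrete, here is a minimal sketch (not the paper's f-INE algorithm): suppose we could observe a sample's influence score under many draws of training randomness; a statistical test then decides whether the sample's mean influence is distinguishable from zero, rather than trusting a single noisy score. The score distributions and the normal-approximation test below are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def influence_test(scores, alpha=0.05):
    """Test H0: mean influence == 0, via a two-sided normal approximation.

    `scores` plays the role of per-sample influence measurements taken
    under different realizations of training randomness (hypothetical).
    """
    n = len(scores)
    mean = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(n)       # standard error of the mean
    z = mean / se                              # test statistic
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return mean, p, p < alpha

# Simulated scores for a genuinely influential sample (mean shifted from 0)
influential = rng.normal(loc=0.5, scale=1.0, size=200)
# Simulated scores for a non-influential sample (mean at 0)
neutral = rng.normal(loc=0.0, scale=1.0, size=200)

m1, p1, sig1 = influence_test(influential)
m2, p2, sig2 = influence_test(neutral)
print(f"influential: mean={m1:.3f}, p={p1:.2e}, significant={sig1}")
print(f"neutral:     mean={m2:.3f}, p={p2:.2e}, significant={sig2}")
```

The point of the framing is that a per-run point estimate can flip sign between runs, while a test over the randomness distribution gives a calibrated, stable decision; f-INE's contribution is obtaining such a decision from a single training run.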