🤖 AI Summary
This work investigates how infinitesimal perturbations to training data affect model performance, with the goal of improving model interpretability and robustness. To address the high computational cost and limited interpretability of conventional influence functions, particularly in non-convex models, the paper integrates Fisher information geometry into influence estimation. It introduces the Approximate Fisher Influence Function (AFIF), which reformulates influence estimation as a weighted empirical risk minimization problem. Leveraging information-geometric principles, the authors derive an efficient approximation algorithm that avoids explicit Hessian inversion. The method achieves a several-fold speedup over Newton-type approaches while maintaining high accuracy and strong robustness on both generalized linear models and non-convex neural networks. By combining geometric insight with practical scalability, AFIF bridges interpretability and usability in influence analysis.
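For context, the classical influence-function formula that Fisher-based methods build on can be sketched as follows (standard notation, not necessarily the paper's own; the substitution shown is the generic Fisher approximation, not AFIF's exact algorithm):

$$
\mathcal{I}(z, z_{\text{test}}) \;=\; -\,\nabla_\theta \ell(z_{\text{test}}, \hat\theta)^\top \, H_{\hat\theta}^{-1} \, \nabla_\theta \ell(z, \hat\theta),
\qquad
H_{\hat\theta} \;=\; \frac{1}{n}\sum_{i=1}^n \nabla_\theta^2 \ell(z_i, \hat\theta).
$$

Fisher-based approaches replace the Hessian $H_{\hat\theta}$, which may be indefinite in non-convex models, with the (empirical) Fisher information matrix $F_{\hat\theta} = \frac{1}{n}\sum_{i=1}^n \nabla_\theta \ell(z_i, \hat\theta)\,\nabla_\theta \ell(z_i, \hat\theta)^\top$, which is positive semidefinite by construction.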
📝 Abstract
Quantifying the influence of infinitesimal changes in training data on model performance is crucial for understanding and improving machine learning models. In this work, we reformulate this problem as a weighted empirical risk minimization problem and enhance existing influence function-based methods by using information geometry to derive a new influence-estimation algorithm. Our formulation proves versatile across a range of applications, and we demonstrate in simulations that it remains informative even in non-convex cases. Furthermore, we show that our method offers significant computational advantages over current Newton step-based methods.
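A minimal sketch of the generic idea, not the paper's AFIF algorithm (whose exact update is not reproduced here): on a toy logistic regression, estimate the influence of upweighting one training point on a held-out loss, once with the exact Hessian and once with the empirical Fisher matrix built from per-example gradient outer products. All names and the toy setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a logistic model fit by gradient descent (illustrative setup).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n   # mean logistic-loss gradient step

# Per-example loss gradients at the fitted parameters.
p = sigmoid(X @ w)
grads = (p - y)[:, None] * X                      # shape (n, d)

damping = 1e-3
# Exact Hessian of the mean logistic loss (convex here, so it is PSD).
H = (X * (p * (1 - p))[:, None]).T @ X / n + damping * np.eye(d)
# Empirical Fisher: average outer product of per-example gradients.
F = grads.T @ grads / n + damping * np.eye(d)

# Influence of upweighting training example 0 on the loss at a test point:
# I = -grad_test^T M^{-1} grad_train, with M the curvature matrix.
x_test, y_test = rng.normal(size=d), 1.0
g_test = (sigmoid(x_test @ w) - y_test) * x_test

infl_hessian = -g_test @ np.linalg.solve(H, grads[0])
infl_fisher = -g_test @ np.linalg.solve(F, grads[0])
```

In the convex case both curvature choices are available and can be compared directly; the appeal of the Fisher variant is that it stays positive semidefinite (and cheap to form from first-order gradients) even where the true Hessian is indefinite.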