Towards Robust Influence Functions with Flat Validation Minima

πŸ“… 2025-05-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing influence functions (IFs) yield unreliable estimates under noisy training dataβ€”not primarily due to bias in parameter-change estimation, but rather because sharp minima in the validation risk distort loss-change estimation. Method: This work establishes, for the first time, a theoretical link between IF estimation error and validation risk sharpness, proving that sharp minima systematically amplify IF bias. We propose a novel IF estimation framework targeting flat validation minima, achieved by reconstructing the influence function via validation-loss curvature regularization and integrating deep model diagnostic techniques for adaptive calibration. Contribution/Results: On multi-task benchmarks, our method significantly improves influence estimation accuracy and demonstrates markedly superior robustness to label noise compared to state-of-the-art IF approaches. It provides a new paradigm for trustworthy model debugging and data quality assessment.

Technology Category

Application Category

πŸ“ Abstract
The Influence Function (IF) is a widely used technique for assessing the impact of individual training samples on model predictions. However, existing IF methods often fail to provide reliable influence estimates in deep neural networks, particularly when applied to noisy training data. This issue does not stem from inaccuracies in parameter change estimation, which has been the primary focus of prior research, but rather from deficiencies in loss change estimation, specifically due to the sharpness of validation risk. In this work, we establish a theoretical connection between influence estimation error, validation set risk, and its sharpness, underscoring the importance of flat validation minima for accurate influence estimation. Furthermore, we introduce a novel estimation form of Influence Function specifically designed for flat validation minima. Experimental results across various tasks validate the superiority of our approach.
Problem

Research questions and friction points this paper is trying to address.

Improving reliability of Influence Function estimates in deep neural networks
Addressing deficiencies in loss change estimation due to sharp validation risk
Introducing novel Influence Function form for flat validation minima
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces flat validation minima for influence estimation
Links influence error to validation risk sharpness
Novel Influence Function form for robust estimates
πŸ”Ž Similar Papers
No similar papers found.
Xichen Ye
Xichen Ye
Fudan University
Machine Learning
Y
Yifan Wu
Fudan University
Weizhong Zhang
Weizhong Zhang
Fudan University
Machine LearningDeep LearningOptimization
C
Cheng Jin
Fudan University, Innovation Center of Calligraphy and Painting Creation Technology, MCT, China
Y
Yifan Chen
Hong Kong Baptist University