🤖 AI Summary
To address the high computational cost of Shapley value estimation, which makes real-time assessment of training-data contributions infeasible for large-scale models, this paper proposes a one-time-trained neural explainer framework. Methodologically, it introduces (1) a reusable Shapley explainer based on the weighted least-squares characterization of the Shapley value, enabling instantaneous inference of Shapley values for arbitrary test samples without retraining; (2) three theoretically grounded acceleration strategies: utility-function approximation, training-data grouping, and optimized Monte Carlo sampling; and (3) a unified modeling paradigm integrating neural fitting, weighted regression, and Shapley approximation. Empirically, on image datasets the explainer trains roughly two orders of magnitude (over 100×) faster and improves performance by more than 2.5× over state-of-the-art baselines, demonstrating gains in both scalability and accuracy.
📝 Abstract
The value and copyright of training data are crucial in the artificial intelligence industry. Service platforms should protect data providers' legitimate rights and reward them fairly for their contributions. The Shapley value, a principled tool for evaluating contributions, is theoretically superior to other attribution methods, but the cost of computing it exactly grows exponentially with the number of data providers. Recent Shapley-based works mitigate this complexity with approximation algorithms; however, they must be rerun from scratch for each test sample, incurring prohibitive costs. We propose Fast-DataShapley, a one-pass training method that leverages the weighted least-squares characterization of the Shapley value to train a reusable explainer model with real-time inference speed. Given new test samples, no retraining is needed to compute the Shapley values of the training data. We further propose three methods with theoretical guarantees to reduce training overhead along two axes: approximate calculation of the utility function and grouped calculation of the training data. A time-complexity analysis establishes the efficiency of our methods. Experiments on various image datasets demonstrate superior performance and efficiency over baselines: performance improves by more than 2.5×, and the explainer's training speed increases by two orders of magnitude.
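The weighted least-squares characterization mentioned above is the classic Shapley-kernel formulation (also underlying KernelSHAP): the Shapley values are the unique solution of a weighted regression over coalitions, with kernel weights w(S) = (n−1) / (C(n,|S|)·|S|·(n−|S|)) and the efficiency constraint Σᵢ φᵢ = v(N) − v(∅). The sketch below is illustrative only, not the paper's trained explainer; the toy utility function `v` and all names are assumptions. It solves the constrained WLS in closed form and cross-checks against brute-force Shapley values.

```python
from itertools import combinations
from math import comb, factorial
import numpy as np

def shapley_wls(v, n):
    """Exact Shapley values via the weighted least-squares characterization:
    minimize sum_S w(S) * (v(S) - v(empty) - sum_{i in S} phi_i)^2 over all
    coalitions 0 < |S| < n, subject to sum_i phi_i = v(N) - v(empty)."""
    players = list(range(n))
    v0, vN = v(frozenset()), v(frozenset(players))
    rows, weights, targets = [], [], []
    for s in range(1, n):
        w = (n - 1) / (comb(n, s) * s * (n - s))  # Shapley kernel weight
        for S in combinations(players, s):
            z = np.zeros(n)
            z[list(S)] = 1.0
            rows.append(z)
            weights.append(w)
            targets.append(v(frozenset(S)) - v0)
    Z, W, y = np.array(rows), np.diag(weights), np.array(targets)
    A, b = Z.T @ W @ Z, Z.T @ W @ y
    # Enforce the efficiency constraint with a Lagrange multiplier.
    Ainv, ones = np.linalg.inv(A), np.ones(n)
    lam = (ones @ Ainv @ b - (vN - v0)) / (ones @ Ainv @ ones)
    return Ainv @ (b - lam * ones)

def shapley_exact(v, n):
    """Brute-force Shapley values by weighted marginal contributions."""
    phi = np.zeros(n)
    players = set(range(n))
    for i in range(n):
        for s in range(n):
            for S in combinations(players - {i}, s):
                S = frozenset(S)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Toy utility with interactions (hypothetical): v(S) = (sum of values in S)^2
x = np.array([1.0, 2.0, 3.0])
v = lambda S: float(sum(x[i] for i in S)) ** 2
print(shapley_wls(v, 3))    # agrees with the brute-force values
print(shapley_exact(v, 3))
```

With all coalitions enumerated and exact kernel weights, the WLS solution coincides with the brute-force values; Fast-DataShapley's contribution is avoiding this per-sample solve by training a reusable explainer once.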