Fast-DataShapley: Neural Modeling for Training Data Valuation

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational complexity of Shapley value estimation—which makes real-time assessment of training data contributions infeasible for large-scale models—this paper proposes a one-time-trained neural interpreter framework. Methodologically, it introduces (1) the first reusable Shapley interpreter based on the weighted least-squares characterization, enabling instantaneous inference of Shapley values for arbitrary test samples; (2) three theoretically grounded acceleration strategies: utility function approximation, data grouping, and optimized Monte Carlo sampling; and (3) a unified modeling paradigm integrating neural fitting, weighted regression, and Shapley approximation. Empirically, on image datasets, the interpreter trains over 100× faster and improves valuation performance by more than 2.5× compared to state-of-the-art baselines, demonstrating significant gains in both scalability and accuracy.
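For context on the sampling strategy the summary mentions: the textbook baseline that "optimized Monte Carlo sampling" improves upon averages marginal contributions over random permutations of the players. The sketch below is that plain baseline only, not the paper's optimized variant; `utility`, `mc_shapley`, and the toy game are illustrative names, not from the paper.

```python
import numpy as np

def mc_shapley(utility, n, num_perms=200, seed=0):
    """Plain Monte Carlo permutation sampling for Shapley values:
    for each sampled permutation, credit each player with its
    marginal contribution when it joins the growing coalition."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n)
    for _ in range(num_perms):
        coalition = set()
        prev = utility(coalition)
        for player in rng.permutation(n):
            coalition.add(player)
            cur = utility(coalition)
            phi[player] += cur - prev
            prev = cur
    return phi / num_perms

# Toy additive game: each player's marginal contribution is constant,
# so even crude sampling recovers the exact Shapley values.
vals = np.array([1.0, 2.0, 5.0])
phi = mc_shapley(lambda S: float(sum(vals[i] for i in S)), 3)
```

Each permutation costs `n` utility evaluations, which is why the paper pairs sampling with utility-function approximation and data grouping.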

📝 Abstract
The value and copyright of training data are crucial in the artificial intelligence industry. Service platforms should protect data providers' legitimate rights and fairly reward them for their contributions. The Shapley value, a potent tool for evaluating contributions, outperforms other methods in theory, but its computational overhead escalates exponentially with the number of data providers. Recent Shapley-based works attempt to mitigate this computational complexity with approximation algorithms. However, they must retrain for each test sample, leading to intolerable costs. We propose Fast-DataShapley, a one-pass training method that leverages the weighted least-squares characterization of the Shapley value to train a reusable explainer model with real-time reasoning speed. Given new test samples, no retraining is required to calculate the Shapley values of the training data. Additionally, we propose three methods with theoretical guarantees to reduce training overhead along two axes: approximate computation of the utility function and group-wise computation over the training data. We analyze time complexity to show the efficiency of our methods. Experimental evaluations on various image datasets demonstrate superior performance and efficiency compared to baselines. Specifically, performance improves by more than 2.5×, and the explainer's training speed increases by two orders of magnitude.
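The weighted least-squares characterization the abstract leans on states that the Shapley values are the solution of a kernel-weighted linear regression over coalitions (the basis of KernelSHAP-style estimators). As a minimal illustration of that characterization—not the authors' trained explainer—the sketch below recovers exact Shapley values for a tiny game by solving the regression over all proper coalitions, with the efficiency constraint enforced by variable elimination. `wls_shapley` and the toy utility are illustrative names, not from the paper.

```python
import itertools
import math
import numpy as np

def shapley_kernel(n, s):
    """Shapley kernel weight for a coalition of size s (0 < s < n)."""
    return (n - 1) / (math.comb(n, s) * s * (n - s))

def wls_shapley(utility, n):
    """Recover Shapley values by solving the weighted least-squares
    characterization over all 2^n - 2 proper coalitions. This is
    tractable only for tiny n; Fast-DataShapley's point is to
    amortize this computation with a trained explainer instead."""
    rows, y, w = [], [], []
    empty_u = utility(frozenset())
    for size in range(1, n):
        for coalition in itertools.combinations(range(n), size):
            z = np.zeros(n)
            z[list(coalition)] = 1.0
            rows.append(z)
            y.append(utility(frozenset(coalition)) - empty_u)
            w.append(shapley_kernel(n, size))
    Z, y, w = np.array(rows), np.array(y), np.array(w)
    total = utility(frozenset(range(n))) - empty_u
    # Enforce efficiency (phi sums to `total`) by eliminating phi_{n-1}.
    Zr = Z[:, :-1] - Z[:, -1:]
    yr = y - Z[:, -1] * total
    W = np.diag(w)
    beta = np.linalg.solve(Zr.T @ W @ Zr, Zr.T @ W @ yr)
    return np.append(beta, total - beta.sum())

# Toy additive game: the Shapley values equal each player's own worth.
vals = np.array([1.0, 2.0, 3.0, 4.0])
phi = wls_shapley(lambda S: float(sum(vals[i] for i in S)), 4)
```

The reusable explainer in the paper replaces this per-query solve with a neural network trained once against the same weighted regression objective.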
Problem

Research questions and friction points this paper is trying to address.

Efficiently compute Shapley values for training data valuation
Avoid retraining for each test sample to reduce costs
Improve performance and speed of data contribution evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-pass training with weighted least squares
Reusable explainer model for real-time reasoning
Group calculation reduces training overhead
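To illustrate the group-calculation idea in the last bullet: treating k groups as the players instead of n individual points shrinks the coalition space from 2^n to 2^k, after which per-point values can be assigned within each group. The within-group even split below is one simple rule chosen for illustration; the group sizes, `point_worth`, and exact-permutation solver are assumptions, not the paper's construction.

```python
import itertools
import numpy as np

def exact_shapley(utility, n):
    """Exact Shapley values by averaging marginal contributions over
    all permutations (fine for the handful of groups used here)."""
    phi = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        s = frozenset()
        for player in perm:
            phi[player] += utility(s | {player}) - utility(s)
            s = s | {player}
    return phi / len(perms)

# Toy setup: 6 training points valued via 3 groups of 2, then each
# group's value split evenly among its members.
point_worth = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])  # hypothetical
groups = [[0, 1], [2, 3], [4, 5]]
group_utility = lambda S: float(sum(point_worth[i] for g in S for i in groups[g]))
group_phi = exact_shapley(group_utility, len(groups))
per_point = np.concatenate([np.full(len(g), v / len(g))
                            for g, v in zip(groups, group_phi)])
```

With 1,000 points in 10 groups, the player count drops from 1,000 to 10, which is where the training-overhead savings come from.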
Authors

Haifeng Sun
Associate Professor of Computer Science, Beijing University of Posts and Telecommunications
Natural Language Processing · Intent-Based Networking · NetAI

Yu Xiong
Fuxi AI Lab, NetEase Inc., Hangzhou, China

Runze Wu
Fuxi AI Lab, NetEase Games | University of Science and Technology of China
Data Mining · Machine Learning · Online Games

Xinyu Cai
Shanghai Artificial Intelligence Laboratory
Artificial Intelligence · Autonomous Driving

Changjie Fan
Fuxi AI Lab, NetEase Inc., Hangzhou, China

Lan Zhang
University of Science and Technology of China, Hefei, China

Xiang-Yang Li
University of Science and Technology of China, Hefei, China