🤖 AI Summary
This work addresses the challenge of quantifying an individual data point's influence on a model's loss distribution, bridging the gap between data attribution and membership inference attacks (MIA). To this end, the authors propose WaKA, a method that leverages the Wasserstein distance and k-nearest neighbors (k-NN) classifiers to estimate per-sample attribution scores efficiently, without subset sampling. WaKA combines LiRA's likelihood-ratio principle with the k-NN structure, yielding a unified framework for jointly modeling a data point's value and its privacy risk, and supporting both a priori and a posteriori applications. Experiments show that WaKA achieves MIA performance on par with LiRA on real-world datasets while substantially reducing computational cost. Moreover, in data minimization tasks under class imbalance, WaKA is more robust than baseline methods, including Shapley values.
📝 Abstract
In this paper, we introduce WaKA (Wasserstein K-nearest-neighbors Attribution), a novel attribution method that leverages principles from the LiRA (Likelihood Ratio Attack) framework and k-nearest neighbors (k-NN) classifiers. WaKA efficiently measures the contribution of individual data points to the model's loss distribution by analyzing every possible k-NN classifier that can be constructed from the training set, without requiring sampling of training subsets. WaKA is versatile: it can be used a posteriori as a membership inference attack (MIA) to assess privacy risks, or a priori for privacy influence measurement and data valuation. Thus, WaKA can be seen as bridging the gap between data attribution and MIA by providing a unified framework to distinguish between a data point's value and its privacy risk. For instance, we show that self-attribution values are more strongly correlated with the attack success rate than with a point's contribution to model generalization. We also evaluated WaKA's different uses across diverse real-world datasets, demonstrating performance very close to LiRA when used as an MIA on k-NN classifiers, but with greater computational efficiency. Additionally, WaKA shows greater robustness than Shapley values for data minimization tasks (removal or addition) on imbalanced datasets.
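To make the core idea concrete, the sketch below illustrates the general principle the abstract describes: scoring a training point by the Wasserstein distance between the loss distributions of k-NN classifiers trained with and without that point. Note this is a naive leave-one-out illustration on hypothetical toy data, not the paper's actual algorithm, which derives such scores over all k-NN configurations without retraining or subset sampling; all dataset names and the `knn_losses` helper are assumptions for illustration only.

```python
# Illustrative sketch only (not WaKA's closed-form method): compare the loss
# distributions of k-NN classifiers trained with vs. without one target point,
# using the 1-Wasserstein distance as an attribution proxy.
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical toy data: two 2-D Gaussian classes.
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(2, 1, (60, 2))])
y = np.array([0] * 60 + [1] * 60)
X_test = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
y_test = np.array([0] * 40 + [1] * 40)

def knn_losses(X_tr, y_tr, k=5):
    """Per-sample negative log-likelihood of a k-NN classifier on the test set."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    p = clf.predict_proba(X_test)[np.arange(len(y_test)), y_test]
    return -np.log(np.clip(p, 1e-12, None))

target = 0  # index of the training point whose influence we probe
losses_in = knn_losses(X, y)                 # losses with the target included
mask = np.arange(len(y)) != target
losses_out = knn_losses(X[mask], y[mask])    # losses with the target removed

# Distance between the two loss distributions: larger means more influence.
score = wasserstein_distance(losses_in, losses_out)
print(score)
```

The naive version above requires one retraining per candidate point; the paper's contribution is precisely to avoid this by exploiting the combinatorial structure of k-NN models.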