AI Summary
This work addresses the individualized privacy–utility trade-off in importance sampling under differential privacy (DP), revealing an intrinsic tension: privacy gain increases with model utility yet decreases with sample size. We establish, for the first time, a theoretical characterization of individualized privacy amplification effects and propose a coreset-driven, dual-paradigm sampling framework that jointly optimizes privacy and efficiency. Specifically, we design two privacy-aware methods for constructing importance sampling distributions that integrate DP analysis, privacy amplification techniques, and k-means clustering optimization. Experiments across multiple benchmark datasets demonstrate that our approach significantly improves privacy budget utilization and convergence speed compared to uniform sampling, while achieving higher clustering accuracy.
Abstract
For scalable machine learning on large data sets, subsampling a representative subset is a common approach for efficient model training. This is often achieved through importance sampling, whereby informative data points are sampled more frequently. In this paper, we examine the privacy properties of importance sampling, focusing on an individualized privacy analysis. We find that, in importance sampling, privacy is well aligned with utility but at odds with sample size. Based on this insight, we propose two approaches for constructing sampling distributions: one that optimizes the privacy-efficiency trade-off; and one based on a utility guarantee in the form of coresets. We evaluate both approaches empirically in terms of privacy, efficiency, and accuracy on the differentially private $k$-means problem. We observe that both approaches yield similar outcomes and consistently outperform uniform sampling across a wide range of data sets. Our code is available on GitHub: https://github.com/smair/personalized-privacy-amplification-via-importance-sampling
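To make the core idea concrete, here is a minimal, self-contained sketch of importance sampling for subsampling with inverse-probability reweighting. The score function (distance to the data mean) and all parameter values are illustrative stand-ins, not the paper's privacy-aware constructions; the point is only that informative points are drawn more often while reweighting keeps estimates unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: a dense cluster plus a few distant, informative points.
X = np.concatenate([rng.normal(0.0, 1.0, size=(95, 2)),
                    rng.normal(8.0, 1.0, size=(5, 2))])
n = len(X)

# Illustrative importance scores: distance to the overall mean. Mixing with
# the uniform distribution keeps every probability bounded away from zero,
# which also bounds the inverse-probability weights below.
scores = np.linalg.norm(X - X.mean(axis=0), axis=1)
p = 0.5 * scores / scores.sum() + 0.5 / n

m = 20                               # subsample size
idx = rng.choice(n, size=m, p=p)     # informative points are sampled more often

# Inverse-probability weights make weighted estimates unbiased:
# E[(1/m) * sum_i x_i / (n * p_i)] equals the full-data mean.
w = 1.0 / (n * p[idx])
est = (w[:, None] * X[idx]).mean(axis=0)
```

In subsampled model training, the same reweighting keeps the empirical loss unbiased, and the per-point sampling probabilities are what an individualized privacy-amplification analysis reasons about: points sampled with lower probability enjoy a stronger amplification effect.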