AI Summary
This work addresses user-level differential privacy (DP) in machine learning under multi-ownership settings, where a single training sample may be associated with multiple users, rendering conventional user-level DP definitions and contribution constraints inadequate. We first formalize a novel user-level DP definition tailored to multi-ownership structures. Second, we identify an inherent bias-variance trade-off embedded in contribution constraints. Third, we propose and theoretically analyze a greedy subset selection algorithm that maximizes effective sample utilization while preserving the prescribed privacy budget. Experiments on synthetic logistic regression and Transformer training demonstrate that our method significantly outperforms baselines, achieving higher model accuracy under identical privacy budgets. Furthermore, we systematically quantify the joint impact of different constraint strategies on privacy protection strength and generalization performance. The framework is scalable, verifiable, and directly applicable to real-world multi-ownership scenarios.
Abstract
We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example may be attributed to multiple users, which we call the multi-attribution model. We first provide a carefully chosen definition of user-level DP under the multi-attribution model. Training in the multi-attribution model is facilitated by solving the contribution bounding problem, i.e., the problem of selecting a subset of the dataset for which each user is associated with a limited number of examples. We propose a greedy baseline algorithm for the contribution bounding problem. We then empirically study this algorithm on a synthetic logistic regression task and a Transformer training task, including variants of this baseline algorithm that optimize the chosen subset using different techniques and criteria. We find that the baseline algorithm remains competitive with its variants in most settings, and we build a better understanding of the practical importance of a bias-variance tradeoff inherent in solutions to the contribution bounding problem.
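The abstract does not spell out the greedy baseline, but a natural greedy approach to contribution bounding is: scan the examples once and keep an example only if every user attributed to it still has remaining contribution budget. The sketch below is an illustrative reconstruction under that assumption, not the paper's exact algorithm; the function name, the `(example_id, user_ids)` input format, and the `max_per_user` parameter are all hypothetical.

```python
from collections import defaultdict

def greedy_contribution_bounding(examples, max_per_user):
    """Greedily select a subset of examples so that no user is
    attributed to more than max_per_user selected examples.

    examples: iterable of (example_id, user_ids) pairs, where
        user_ids is the set of users who own that example.
    max_per_user: per-user contribution bound.

    Returns the list of selected example ids, in scan order.
    """
    counts = defaultdict(int)  # selected examples per user so far
    selected = []
    for example_id, user_ids in examples:
        # Keep the example only if every attributed user still has
        # remaining budget; otherwise skip it entirely.
        if all(counts[u] < max_per_user for u in user_ids):
            selected.append(example_id)
            for u in user_ids:
                counts[u] += 1
    return selected
```

Because an example is dropped whenever any one of its owners is saturated, the selected subset depends on scan order, which is exactly the kind of degree of freedom the abstract's variants could exploit (e.g., different orderings or selection criteria), and dropping examples non-uniformly is one source of the bias in the bias-variance tradeoff the paper studies.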