It's My Data Too: Private ML for Datasets with Multi-User Training Examples

📅 2025-03-05
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses user-level differential privacy (DP) in machine learning under multi-ownership settings, where a single training example may be associated with multiple users, rendering conventional user-level DP definitions and contribution constraints inadequate. The authors first formalize a user-level DP definition tailored to multi-ownership structures. Second, they identify a bias-variance trade-off inherent in contribution constraints. Third, they propose and analyze a greedy subset selection algorithm that maximizes effective sample utilization while preserving the prescribed privacy budget. Experiments on synthetic logistic regression and Transformer training show that the method outperforms baselines, achieving higher model accuracy under identical privacy budgets. The paper also quantifies the joint impact of different constraint strategies on privacy protection strength and generalization performance, and the framework applies directly to real-world multi-ownership scenarios.

๐Ÿ“ Abstract
We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example may be attributed to multiple users, which we call the multi-attribution model. We first provide a carefully chosen definition of user-level DP under the multi-attribution model. Training in the multi-attribution model is facilitated by solving the contribution bounding problem, i.e. the problem of selecting a subset of the dataset for which each user is associated with a limited number of examples. We propose a greedy baseline algorithm for the contribution bounding problem. We then empirically study this algorithm for a synthetic logistic regression task and a transformer training task, including studying variants of this baseline algorithm that optimize the subset chosen using different techniques and criteria. We find that the baseline algorithm remains competitive with its variants in most settings, and build a better understanding of the practical importance of a bias-variance tradeoff inherent in solutions to the contribution bounding problem.
Problem

Research questions and friction points this paper is trying to address.

Defines user-level differential privacy for multi-attribution datasets.
Solves the contribution bounding problem for multi-user training examples.
Evaluates greedy algorithms for dataset subset selection in ML tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

User-level differential privacy in multi-attribution datasets
Greedy algorithm for contribution bounding problem
Empirical study on logistic regression and transformer tasks
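The contribution bounding problem described in the abstract, selecting a subset of examples so that each user is attributed to at most a bounded number of them, admits a natural first-fit greedy baseline. The sketch below is an illustrative reconstruction, not the paper's exact algorithm: it scans examples in order and admits one only if every attributed user is still under the cap.

```python
from collections import defaultdict

def greedy_contribution_bounding(examples, cap):
    """Greedily select examples so no user is attributed to more
    than `cap` selected examples.

    `examples` is a list of sets of user ids attributed to each
    training example. This is a minimal first-fit sketch; the
    paper's baseline and its optimized variants may differ.
    """
    counts = defaultdict(int)  # selected-example count per user
    selected = []
    for idx, users in enumerate(examples):
        # admit only if every attributed user stays under the cap
        if all(counts[u] < cap for u in users):
            selected.append(idx)
            for u in users:
                counts[u] += 1
    return selected
```

For instance, with examples `[{1, 2}, {1}, {2, 3}, {1, 3}]` and `cap=1`, only the first example is kept, since every later example shares a user with it; raising the cap admits more examples at the cost of a looser per-user sensitivity bound, which is the bias-variance trade-off the paper studies.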
Arun Ganesh
Research Scientist, Google
differential privacy
Ryan McKenna
Research Scientist, Google
Differential Privacy, Graphical Models, Machine Learning, Numerical Optimization, Federated Analytics
Brendan McMahan
Google Research
Adam Smith
Boston University and Google DeepMind
Fan Wu
University of Illinois Urbana-Champaign