🤖 AI Summary
This paper studies user-level differentially private partition selection (also known as private set union): a large number of users each hold a subset of an unbounded item universe, and the goal is to output as many items as possible from the union of their sets while satisfying user-level differential privacy, enabling applications such as private vocabulary extraction, statistics over categorical data, and learning embeddings over user-provided items. The authors propose MaximumAdaptiveDegree (MAD), an algorithm that adaptively reroutes weight from items whose weight is far above the threshold needed for privacy to lower-weight items, increasing the probability that less frequent items are output; MAD is proven to stochastically dominate the standard parallel algorithm for this problem. A two-round variant further uses the first round's results to bias the weighting in the second round and maximize the number of items output. Both algorithms admit efficient implementations in massively parallel computation systems, and in experiments they provide the best results among parallel approaches while scaling to datasets with hundreds of billions of items, up to three orders of magnitude larger than those analyzed by prior sequential methods.
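To make the rerouting idea concrete, below is a minimal, hypothetical Python sketch. It assumes each user splits a unit weight budget uniformly across their items and uses a placeholder cap in place of the paper's calibrated threshold and noise accounting; the function names (`baseline_weights`, `adaptive_weights`) are illustrative stand-ins, not the paper's implementation.

```python
from collections import defaultdict

# Illustrative sketch of adaptive weight rerouting (MAD-style).
# Assumptions: each user spreads a unit budget uniformly over their items,
# and `cap` is placeholder arithmetic, not the paper's privacy calibration.

def baseline_weights(user_sets):
    """Standard parallel baseline: each user spreads a unit budget uniformly."""
    w = defaultdict(float)
    for items in user_sets:
        share = 1.0 / len(items) if items else 0.0
        for it in items:
            w[it] += share
    return w


def adaptive_weights(user_sets, cap):
    """Users scale down contributions to items whose baseline weight already
    clears the cap and reroute the excess to their lighter items."""
    base = baseline_weights(user_sets)
    w = defaultdict(float)
    for items in user_sets:
        if not items:
            continue
        share = 1.0 / len(items)
        light = [it for it in items if base[it] < cap]
        excess = 0.0
        for it in items:
            if base[it] >= cap:
                needed = share * cap / base[it]  # just enough to stay at the cap
                w[it] += needed
                excess += share - needed
            else:
                w[it] += share
        for it in light:
            w[it] += excess / len(light)
    return w
```

Because each user only redistributes their own budget, a user's total contribution never grows; borderline items simply receive the weight that over-popular items did not need.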
📝 Abstract
In the differentially private partition selection problem (a.k.a. private set union, private key discovery), users hold subsets of items from an unbounded universe. The goal is to output as many items as possible from the union of the users' sets while maintaining user-level differential privacy. Solutions to this problem are a core building block for many privacy-preserving ML applications including vocabulary extraction in a private corpus, computing statistics over categorical data, and learning embeddings over user-provided items. We propose an algorithm for this problem, MaximumAdaptiveDegree (MAD), which adaptively reroutes weight from items with weight far above the threshold needed for privacy to items with smaller weight, thereby increasing the probability that less frequent items are output. Our algorithm can be efficiently implemented in massively parallel computation systems allowing scalability to very large datasets. We prove that our algorithm stochastically dominates the standard parallel algorithm for this problem. We also develop a two-round version of our algorithm where results of the computation in the first round are used to bias the weighting in the second round to maximize the number of items output. In experiments, our algorithms provide the best results across the board among parallel algorithms and scale to datasets with hundreds of billions of items, up to three orders of magnitude larger than those analyzed by prior sequential algorithms.
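The two-round idea mentioned at the end of the abstract can be sketched in a similarly hedged way: run a first release on part of the budget, then let users re-spread their weight over items not yet released. The helpers below (`noisy_release`, `two_round_union`), the uniform weighting, the Gaussian noise scale, and the specific biasing rule are illustrative assumptions, not the paper's two-round procedure or its privacy accounting.

```python
import numpy as np
from collections import defaultdict

# Hypothetical two-round sketch: round 1 releases items with part of the
# budget; round 2 biases the weighting by having users concentrate their
# budget on items that were not released in round 1.

def uniform_weights(user_sets):
    """Each user spreads a unit budget uniformly over their items."""
    w = defaultdict(float)
    for items in user_sets:
        share = 1.0 / len(items) if items else 0.0
        for it in items:
            w[it] += share
    return w


def noisy_release(user_sets, weight_fn, threshold, noise_scale, rng):
    """Weight items, add Gaussian noise, and release those above the threshold."""
    w = weight_fn(user_sets)
    return {it for it, v in w.items() if v + rng.normal(0.0, noise_scale) > threshold}


def two_round_union(user_sets, threshold, noise_scale, seed=0):
    rng = np.random.default_rng(seed)
    # Round 1: standard weighting.
    round1 = noisy_release(user_sets, uniform_weights, threshold, noise_scale, rng)

    # Round 2: users re-spread their budget over items not yet released.
    def biased_weights(sets):
        w = defaultdict(float)
        for items in sets:
            remaining = [it for it in items if it not in round1]
            share = 1.0 / len(remaining) if remaining else 0.0
            for it in remaining:
                w[it] += share
        return w

    round2 = noisy_release(user_sets, biased_weights, threshold, noise_scale, rng)
    return round1 | round2
```

The intended benefit of the second round, per the abstract, is that weight no longer "wasted" on items already released can be redirected toward items that narrowly missed the threshold, increasing the total number of items output.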