🤖 AI Summary
This work addresses the challenge of dynamically allocating privacy budgets across multiple rounds of adaptive queries, in settings where each user contributes multiple data components, under the framework of generalized differential privacy. To improve the utility of later queries while preserving user privacy, we propose the first adaptive budget allocation framework that adjusts privacy budgets based on the outputs of prior queries. By integrating privacy amplification techniques, our approach significantly improves query accuracy without weakening privacy guarantees. Empirical evaluations show that the proposed method conserves privacy budget across diverse applications and achieves an improved privacy–utility trade-off.
📝 Abstract
We study the problem of adaptive privacy budgeting under generalized differential privacy. Consider the setting where each user $i\in [n]$ holds a tuple $x_i\in U:=U_1\times \dotsb \times U_T$, where $x_i(l)\in U_l$ represents the $l$-th component of their data. For every $l\in [T]$ (or a subset), an untrusted analyst wishes to compute some $f_l(x_1(l),\dots,x_n(l))$ while respecting the privacy of each user. For many functions $f_l$, data from the users are not all equally important, and there is potential to spend the users' privacy budgets strategically, yielding privacy savings that can be used to improve the utility of later queries. In particular, the budgeting should adapt to the outputs of previous queries, so that greater savings can be achieved on more typical instances. In this paper, we provide such an adaptive budgeting framework and demonstrate its usefulness in a variety of applications.
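The abstract does not spell out the mechanism, but the core idea — spending a total privacy budget across $T$ queries, with each round's allocation chosen from previously *released* (noisy) answers so the rule itself leaks nothing extra — can be sketched as follows. This is a minimal illustration under basic sequential composition, with the Laplace mechanism standing in for the per-query mechanism; the allocation heuristic, function names, and parameters here are our own assumptions, not the paper's method.

```python
import numpy as np

def laplace_mean(values, eps, lo=0.0, hi=1.0):
    """eps-DP estimate of the mean of values clipped to [lo, hi]."""
    n = len(values)
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / n  # one user changes the mean by at most this
    return clipped.mean() + np.random.laplace(scale=sensitivity / eps)

def adaptive_budget_queries(data, total_eps, min_eps=0.05):
    """
    data: (n, T) array; column l holds the users' l-th components x_i(l).
    Answers the T mean queries in order, choosing each round's budget
    eps_l from the previously *released* noisy answer only, so the
    allocation depends on no unprotected data. Basic composition
    guarantees sum(eps_l) <= total_eps overall.
    """
    n, T = data.shape
    remaining = total_eps
    answers = []
    prev = 0.5  # neutral prior before any answer is released
    for l in range(T):
        rounds_left = T - l
        # Illustrative heuristic (our assumption): spend more of the
        # remaining budget when the previous released answer was near a
        # clipping boundary, i.e. the instance looked less typical.
        frac = (1.0 + abs(prev - 0.5)) / rounds_left
        eps_l = min(remaining, max(min_eps, frac * remaining))
        remaining -= eps_l
        prev = laplace_mean(data[:, l], eps_l)
        answers.append((eps_l, prev))
    return answers
```

Note the design constraint this sketch respects: because `eps_l` is a function of noisy outputs already released under differential privacy, the adaptivity comes "for free" by post-processing, and privacy accounting reduces to summing the per-round budgets.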