🤖 AI Summary
Existing transformations from pure differential privacy (DP) to joint output and runtime DP (JOT-DP) suffer from three fundamental limitations under the unbounded-DP setting: (i) failure probability does not decay with dataset size, (ii) severe computational overhead, and (iii) analyses relying on oversimplified coin-flip models.
Method: We establish the first formal connection between the JOT-DP error lower bound and the underlying computational model, and propose an efficient transformation grounded in the random-access machine (RAM) model, assuming both constant-time random number generation and that the dataset size is given (or computable in constant time).
Results: Our transformation achieves polynomially decaying failure probability under those conditions; otherwise, it guarantees an arbitrarily small constant error probability. It preserves the original algorithm's asymptotic time complexity and ensures the output distribution is γ-close in total variation distance to that of the source DP program. This work breaks the long-standing trade-off among error controllability, efficiency preservation, and computational realism.
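The γ-closeness guarantee above is stated in total variation distance, which for discrete distributions equals half the L1 distance between their probability mass functions. A minimal reference implementation of this metric (illustrative only; not part of the paper's construction):

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions.

    p and q map outcomes to probabilities; outcomes absent from a dict
    are treated as having probability 0. TV(p, q) = (1/2) * sum_x |p(x) - q(x)|.
    """
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)


# Example: distributions differing by 0.1 of mass on one outcome
# have TV distance 0.1.
print(total_variation({"a": 0.6, "b": 0.4}, {"a": 0.5, "b": 0.5}))
```

A transformed program whose output distribution is γ-close to the original, in this sense, can differ from it on any event by at most probability γ.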
📝 Abstract
Recent works have started to theoretically investigate how we can protect differentially private programs against timing attacks, by making the joint distribution of the output and the runtime differentially private (JOT-DP). However, the existing approaches to JOT-DP have some limitations, particularly in the setting of unbounded DP (which protects the size of the dataset and applies to arbitrarily large datasets). First, the known conversion of pure DP programs to pure JOT-DP programs in the unbounded setting (a) incurs a constant additive increase in error probability (and thus does not provide vanishing error as $n \to \infty$), (b) produces JOT-DP programs that fail to preserve the computational efficiency of the original pure DP program, and (c) is analyzed in a toy computational model in which the runtime is defined to be the number of coin flips. In this work, we overcome these limitations. Specifically, we show that the error required for pure JOT-DP in the unbounded setting depends on the model of computation. In a randomized RAM model where the dataset size $n$ is given (or can be computed in constant time) and we can generate random numbers (not just random bits) in constant time, polynomially small error probability is necessary and sufficient. If $n$ is not given or we only have a random-bit generator, an (arbitrarily small) constant error probability is necessary and sufficient. The aforementioned positive results are proven by efficient procedures to convert any pure JOT-DP program $P$ in the upper-bounded setting to a pure JOT-DP program $P'$ in the unbounded setting, such that the output distribution of $P'$ is $\gamma$-close in total variation distance to that of $P$, where $\gamma$ is either an arbitrarily small constant or polynomially small, depending on the model of computation.
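The abstract's distinction between a constant-time random-number generator and a random-bit generator can be made concrete. The standard way to draw a uniform sample from $\{0, \dots, n-1\}$ using only fair bits is rejection sampling: it runs in constant *expected* time, but its worst-case runtime is unbounded, and truncating it to any fixed number of bits leaves a nonzero failure probability. A hedged sketch (the function `uniform_from_bits` and this particular scheme are illustrative assumptions, not the paper's construction):

```python
import random


def uniform_from_bits(n, bit=lambda: random.getrandbits(1)):
    """Sample uniformly from {0, ..., n-1} using only fair random bits.

    Rejection sampling: form a k-bit candidate where 2**k is the smallest
    power of two >= n, and retry if the candidate falls outside the range.
    Each round succeeds with probability > 1/2, so the expected number of
    rounds is < 2, but no fixed runtime bound holds for every execution.
    """
    k = (n - 1).bit_length() if n > 1 else 1
    while True:
        candidate = sum(bit() << i for i in range(k))
        if candidate < n:
            return candidate
```

This gap is one intuition for why the two models separate: with a true constant-time random-number generator the sampling step contributes no runtime variation, while with only a bit generator any worst-case-bounded sampler must tolerate some residual error.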