Revisiting Hyperparameter Tuning with Differential Privacy

📅 2022-11-03
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address the rapid exhaustion of privacy budgets during hyperparameter tuning under differential privacy, this paper proposes a novel tuning framework whose privacy loss is independent of the number of candidates and grows only with the utility gained. It establishes, for the first time in hyperparameter search, an explicit trade-off between privacy loss and utility gain, theoretically proving that the additional privacy loss is bounded by the square root of the utility gain and empirically scales as the square root of the logarithm of the utility term. The method integrates Rényi differential privacy, adaptive grid search, and a doubling-step mechanism, enabling full-space exploration while incurring significantly lower privacy overhead than baseline approaches. Extensive experiments on multiple benchmark datasets demonstrate that the framework achieves superior utility-privacy trade-offs under stringent privacy constraints.
📝 Abstract
Hyperparameter tuning is a common practice in the application of machine learning but is typically ignored in the literature on privacy-preserving machine learning due to its negative effect on the overall privacy parameter. In this paper, we aim to tackle this fundamental yet challenging problem by providing an effective hyperparameter tuning framework with differential privacy. The proposed method allows us to adopt a broader hyperparameter search space and even to perform a grid search over the whole space, since its privacy loss parameter is independent of the number of hyperparameter candidates. Interestingly, it instead correlates with the utility gained from hyperparameter searching, revealing an explicit and mandatory trade-off between privacy and utility. Theoretically, we show that the additional privacy loss bound incurred by hyperparameter tuning is upper-bounded by the square root of the gained utility. However, we note that the additional privacy loss bound empirically scales like the square root of the logarithm of the utility term, benefiting from the design of the doubling step.
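The abstract's key point is that a doubling-step design makes the privacy cost grow with the number of search rounds rather than with the number of candidates. A minimal, hypothetical sketch of that idea (this is not the paper's actual algorithm; the report-noisy-max-style selection, the Laplace noise, and the per-round `eps_per_round` accounting are all illustrative assumptions):

```python
import random

def doubling_step_search(candidates, utility, eps_per_round=0.5, seed=0):
    """Toy doubling-step hyperparameter search under a per-round DP charge.

    Each round evaluates the first k candidates with noisy utilities and
    charges eps_per_round once; k doubles between rounds. Total privacy
    cost therefore grows like log2(#candidates), not #candidates.
    """
    rng = random.Random(seed)
    best, best_util, eps_spent = None, float("-inf"), 0.0
    k, rounds = 1, 0
    while True:
        batch = candidates[:k]
        # One report-noisy-max-style pass over the current batch:
        # Laplace(1/eps) noise via the difference of two exponentials.
        noisy = [(utility(c) + rng.expovariate(eps_per_round)
                  - rng.expovariate(eps_per_round), c) for c in batch]
        top_util, top = max(noisy)
        eps_spent += eps_per_round  # charged per round, not per candidate
        rounds += 1
        if top_util > best_util:
            best, best_util = top, top_util
        if k >= len(candidates):
            break
        k = min(2 * k, len(candidates))  # the doubling step
    return best, eps_spent, rounds

# 16 candidates -> k = 1, 2, 4, 8, 16 -> 5 rounds, eps_spent = 2.5
best, eps, rounds = doubling_step_search(
    list(range(16)), utility=lambda x: -(x - 11) ** 2)
```

With 16 candidates the search runs only 5 doubling rounds, so the total charge is 5 × `eps_per_round` regardless of how finely the grid is subdivided, which is the qualitative behavior the abstract describes.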
Problem

Research questions and friction points this paper is trying to address.

Addressing hyperparameter tuning in privacy-preserving machine learning
Providing a differential privacy framework for hyperparameter search
Balancing privacy loss and utility in hyperparameter optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential privacy for hyperparameter tuning
Privacy loss independent of candidate count
Utility-privacy trade-off via square-root scaling
Youlong Ding
Hebrew University of Jerusalem
Theoretical Computer Science · Cryptography · Machine Learning
Xueyang Wu
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China