Scalable Differentially Private Bayesian Optimization

📅 2025-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Differential privacy (DP) in high-dimensional Bayesian optimization (BO) remains impractical, particularly for hyperparameter tuning of large neural networks, because existing methods cannot simultaneously deliver rigorous privacy guarantees and optimization efficiency at scale. Method: This work introduces gradient information into the DP-BO framework for the first time, proposing a gradient-guided high-dimensional DP mechanism that combines adaptive noise injection with private gradient estimation and does not require strong modeling assumptions. Contribution/Results: The authors prove exponential convergence to a neighborhood of the global optimum under strict (ε,δ)-DP. Empirically, the method outperforms state-of-the-art private optimization algorithms across multiple high-dimensional hyperparameter tuning tasks, achieving certified (ε,δ) privacy bounds, faster convergence, and a higher probability of identifying high-quality optima.
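The private-gradient-estimation step mentioned in the summary can be illustrated with a standard Gaussian-mechanism release of a clipped gradient. This is a minimal sketch under common DP conventions; the function name, clipping scheme, and noise calibration are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def private_gradient(grad, clip_norm, epsilon, delta, rng=None):
    """Release a gradient estimate under (epsilon, delta)-DP via the
    Gaussian mechanism (illustrative sketch, not the paper's method)."""
    rng = np.random.default_rng() if rng is None else rng
    # Clip the gradient so the L2 sensitivity of the release is at most clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Classical Gaussian-mechanism calibration:
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + rng.normal(0.0, sigma, size=grad.shape)
```

In a gradient-informed BO loop, a release like this would feed the noisy gradient into the surrogate or acquisition step instead of the raw gradient, so the data-dependent signal the optimizer sees is already privatized.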

📝 Abstract
In recent years, there has been much work on scaling Bayesian Optimization to high-dimensional problems, for example hyperparameter tuning in large neural network models. These scalable methods have been successful, finding high objective values much more quickly than traditional global Bayesian Optimization or random search-based methods. At the same time, these large neural network models often use sensitive data, but preservation of Differential Privacy has not scaled alongside these modern Bayesian Optimization procedures. Here we develop a method to privately estimate potentially high-dimensional parameter spaces using Gradient Informative Bayesian Optimization. Our theoretical results prove that under suitable conditions, our method converges exponentially fast to a ball around the optimal parameter configuration. Moreover, regardless of whether the assumptions are satisfied, we show that our algorithm maintains privacy, and we empirically demonstrate superior performance to existing methods in the high-dimensional hyperparameter setting.
Problem

Research questions and friction points this paper is trying to address.

High-dimensional hyperparameter tuning
Privacy-preserving parameter estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable Bayesian Optimization
Differential Privacy preservation
Gradient Informative Bayesian Optimization
Getoar Sopa
Department of Statistics, Columbia University, New York, NY
Juraj Marusic
Columbia University
Statistics, Machine Learning
Marco Avella-Medina
Department of Statistics, Columbia University, New York, NY
John P. Cunningham
Department of Statistics, Columbia University, New York, NY