BONSAI: Bayesian Optimization with Natural Simplicity and Interpretability

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard Bayesian optimization often deviates excessively from default hyperparameter configurations during tuning, struggling to distinguish between critical and redundant parameter changes, thereby increasing validation costs. This work proposes a default-aware Bayesian optimization method that explicitly incorporates a preference for default values into the acquisition step for the first time. By pruning low-impact deviations from the default while explicitly bounding the resulting loss in acquisition value, the approach actively suppresses adjustments to parameters with negligible impact on performance. Built within a Gaussian process framework, the method is compatible with mainstream acquisition functions such as Expected Improvement (EI) and GP-UCB, and comes with theoretical regret bounds. Empirical results across multiple real-world tasks demonstrate that the proposed method significantly reduces the number of non-default parameters while matching the performance of standard Bayesian optimization, all with negligible additional computational overhead.

📝 Abstract
Bayesian optimization (BO) is a popular technique for sample-efficient optimization of black-box functions. In many applications, the parameters being tuned come with a carefully engineered default configuration, and practitioners only want to deviate from this default when necessary. Standard BO, however, does not aim to minimize deviation from the default and, in practice, often pushes weakly relevant parameters to the boundary of the search space. This makes it difficult to distinguish between important and spurious changes and increases the burden of vetting recommendations when the optimization objective omits relevant operational considerations. We introduce BONSAI, a default-aware BO policy that prunes low-impact deviations from a default configuration while explicitly controlling the loss in acquisition value. BONSAI is compatible with a variety of acquisition functions, including expected improvement and upper confidence bound (GP-UCB). We theoretically bound the regret incurred by BONSAI, showing that, under certain conditions, it enjoys the same no-regret property as vanilla GP-UCB. Across many real-world applications, we empirically find that BONSAI substantially reduces the number of non-default parameters in recommended configurations while maintaining competitive optimization performance, with little effect on wall time.
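To make the core idea concrete, here is a minimal sketch of default-aware pruning as described in the abstract: take the candidate produced by maximizing a standard acquisition function, then greedily snap individual coordinates back to their default values whenever doing so costs little acquisition value. This is an illustrative heuristic under stated assumptions, not the paper's actual algorithm; the function name `prune_toward_default` and the tolerance parameter `tol` are invented for this example.

```python
import numpy as np

def prune_toward_default(acq, x_star, x_default, tol=1e-3):
    """Greedily revert coordinates of a candidate toward the default.

    acq       : acquisition function mapping a 1-D array to a float
                (e.g. EI or a GP-UCB score).
    x_star    : candidate obtained by maximizing `acq`.
    x_default : the carefully engineered default configuration.
    tol       : total allowed loss in acquisition value (illustrative
                absolute budget, not the paper's control scheme).
    """
    x = x_star.copy()
    base = acq(x_star)
    # Try the smallest deviations first: they are the most likely to be
    # spurious and the cheapest to revert.
    for i in np.argsort(np.abs(x_star - x_default)):
        candidate = x.copy()
        candidate[i] = x_default[i]
        # Keep the reversion only if the cumulative acquisition loss
        # stays within the budget.
        if base - acq(candidate) <= tol:
            x = candidate
    return x

# Toy acquisition with one important coordinate (index 0) and one
# weakly relevant coordinate (index 1).
acq = lambda x: -(x[0] - 0.9) ** 2 - 0.01 * (x[1] - 0.6) ** 2
pruned = prune_toward_default(
    acq, np.array([0.9, 0.6]), np.array([0.5, 0.5]), tol=1e-3
)
# The weakly relevant coordinate snaps back to its default (0.5),
# while the important one stays at its tuned value (0.9).
```

The sketch captures the trade-off the abstract describes: the number of non-default parameters shrinks, while the acquisition value (and hence optimization performance) is degraded by at most a controlled amount.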
Problem

Research questions and friction points this paper is trying to address.

Bayesian optimization
default configuration
parameter deviation
black-box optimization
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Optimization
Default-aware Optimization
Interpretability
Parameter Pruning
Acquisition Function