Hyperparameter Optimization via Interacting with Probabilistic Circuits

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in interactive hyperparameter optimization (HPO)—including inaccurate modeling of user priors, high computational cost of inner-loop acquisition function optimization, and distortion in human feedback—this paper proposes the first acquisition-function-free interactive Bayesian optimization framework based on probabilistic circuits (PCs). Leveraging PCs’ interpretability, differentiable conditional inference, and efficient exact sampling, our method directly generates candidate configurations conditioned on user feedback, thereby avoiding belief distortion induced by weighted acquisition functions. The approach natively supports mixed hyperparameter spaces and enables end-to-end exact embedding and dynamic updating of user beliefs. Empirically, it achieves state-of-the-art performance on standard HPO benchmarks and significantly outperforms existing Bayesian optimization baselines on interactive HPO benchmarks.

📝 Abstract
Despite the growing interest in designing truly interactive hyperparameter optimization (HPO) methods, to date, only a few allow for the inclusion of human feedback. Existing interactive Bayesian optimization (BO) methods incorporate human beliefs by weighting the acquisition function with a user-defined prior distribution. However, in light of the non-trivial inner optimization of the acquisition function prevalent in BO, such weighting schemes do not always accurately reflect the given user beliefs. We introduce a novel BO approach leveraging tractable probabilistic models called probabilistic circuits (PCs) as a surrogate model. PCs encode a tractable joint distribution over the hybrid hyperparameter space and evaluation scores, and they enable exact conditional inference and sampling. Based on conditional sampling, we construct a novel selection policy that enables acquisition-function-free generation of candidate points (thereby eliminating the need for an additional inner-loop optimization) and ensures that user beliefs are reflected accurately in the selection policy. We provide a theoretical analysis and an extensive empirical evaluation, demonstrating that our method achieves state-of-the-art performance in standard HPO and outperforms interactive BO baselines in interactive HPO.
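The core mechanism — conditioning a tractable joint distribution on a good score and sampling candidates exactly from that conditional — can be sketched with a toy example. The code below uses an explicit table over one hyperparameter and a binarized score as a stand-in for a learned probabilistic circuit; all values and names are illustrative, not from the paper.

```python
import random

# Toy joint distribution P(learning_rate, score_bucket), standing in for a
# learned probabilistic circuit. In a real PC this joint would be encoded by
# a sum-product structure; here an explicit table keeps the idea visible.
joint = {
    (0.001, "high"): 0.05, (0.001, "low"): 0.25,
    (0.01,  "high"): 0.20, (0.01,  "low"): 0.10,
    (0.1,   "high"): 0.10, (0.1,   "low"): 0.30,
}

def conditional(joint, score):
    """Exact conditional P(lr | score): tractable because the joint is explicit."""
    marginal = sum(p for (lr, s), p in joint.items() if s == score)
    return {lr: p / marginal for (lr, s), p in joint.items() if s == score}

def sample(dist, rng):
    """Draw one hyperparameter value from a (normalized) distribution."""
    r, acc = rng.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point rounding

rng = random.Random(0)
cond = conditional(joint, "high")                    # condition on a good score
candidates = [sample(cond, rng) for _ in range(5)]   # candidates, no acquisition fn
```

Because sampling happens directly from the exact conditional, no acquisition function is weighted or optimized; a user prior could be folded in by conditioning on further evidence in the same way.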
Problem

Research questions and friction points this paper is trying to address.

Incorporating human feedback in hyperparameter optimization accurately
Eliminating inner-loop optimization in Bayesian optimization methods
Leveraging probabilistic circuits for tractable hyperparameter space modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses probabilistic circuits as surrogate models
Enables exact conditional inference and sampling
Eliminates inner-loop optimization via conditional sampling
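The resulting optimization loop can be sketched end to end: fit a surrogate on observed (configuration, score) pairs, then sample the next candidate from the surrogate instead of maximizing an acquisition function. The surrogate below is a crude rank-weighted stand-in for PC learning (a real PC would model a joint density over configurations and scores); the objective and all names are hypothetical.

```python
import random

def toy_objective(lr):
    """Stand-in for expensive model training; score to maximize (optimum at 0.01)."""
    return -(lr - 0.01) ** 2

def fit_surrogate(history):
    """Crude stand-in for surrogate learning: weight each observed config by the
    rank of its score, so better configs get more probability mass."""
    ranked = sorted(history, key=lambda h: h[1])
    weights = {lr: rank + 1 for rank, (lr, _) in enumerate(ranked)}
    total = sum(weights.values())
    return {lr: w / total for lr, w in weights.items()}

def sample_candidate(dist, rng, jitter=0.2):
    """Sample a config from the surrogate, with multiplicative jitter so that
    unobserved regions of the space stay reachable."""
    r, acc = rng.random(), 0.0
    for lr, p in dist.items():
        acc += p
        if r < acc:
            break
    return max(1e-4, lr * (1 + rng.uniform(-jitter, jitter)))

rng = random.Random(1)
history = [(lr, toy_objective(lr)) for lr in (0.001, 0.05, 0.1)]  # initial design
for _ in range(20):                      # BO loop with no acquisition function
    surrogate = fit_surrogate(history)
    lr = sample_candidate(surrogate, rng)
    history.append((lr, toy_objective(lr)))
best_lr, best_score = max(history, key=lambda h: h[1])
```

The inner-loop acquisition optimization of standard BO is replaced by a single draw from the surrogate, which is exactly the structural simplification the paper attributes to PC-based conditional sampling.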