🤖 AI Summary
Preference-driven search in multi-objective black-box optimization remains challenging due to the scarcity and ordinal nature of human feedback.
Method: This paper proposes an interactive Bayesian optimization framework that integrates strict monotonicity priors into a neural ensemble architecture, yielding a monotonic utility surrogate model explicitly designed for pairwise comparisons. The model jointly learns preferences and quantifies uncertainty, ensuring decision consistency by construction.
Contribution/Results: To our knowledge, this is the first approach to embed strict monotonicity directly into a neural ensemble for ordinal preference learning. It achieves state-of-the-art performance across multiple benchmark problems, significantly outperforming existing methods—particularly under high utility evaluation noise. Ablation studies confirm that monotonicity modeling contributes over 35% to overall performance gains, demonstrating its critical role in robust, sample-efficient preference optimization.
📝 Abstract
Many real-world black-box optimization problems have multiple conflicting objectives. Rather than attempting to approximate the entire set of Pareto-optimal solutions, interactive preference learning allows to focus the search on the most relevant subset. However, few previous studies have exploited the fact that utility functions are usually monotonic. In this paper, we address the Bayesian Optimization with Preference Exploration (BOPE) problem and propose using a neural network ensemble as a utility surrogate model. This approach naturally integrates monotonicity and supports pairwise comparison data. Our experiments demonstrate that the proposed method outperforms state-of-the-art approaches and exhibits robustness to noise in utility evaluations. An ablation study highlights the critical role of monotonicity in enhancing performance.