🤖 AI Summary
Conventional multi-objective multi-armed bandit (MO-MAB) methods solely pursue Pareto optimality, ignoring users’ heterogeneous preferences across objectives—resulting in solution sets that fail to satisfy personalized requirements.
Method: We propose the first preference-aware MO-MAB framework, formally introducing two novel settings: “unknown preferences” and “hidden preferences.” Our adaptive algorithm integrates preference estimation, preference-weighted optimization, and a confidence-interval-driven multi-objective trade-off mechanism.
Contribution/Results: We provide rigorous theoretical guarantees, proving that the algorithm achieves a near-optimal regret bound of $\tilde{O}(\sqrt{T})$. Empirical evaluations demonstrate that our method significantly outperforms existing Pareto-oriented baselines across diverse preference scenarios, enabling customized optimization within the Pareto frontier.
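To make the summarized mechanism concrete, here is a minimal, hypothetical sketch of the overall idea: estimate the user's preference vector from noisy observations, then select arms by a preference-weighted optimistic (UCB-style) score over the objective estimates. All quantities (arm means, noise levels, the confidence bonus) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-armed bandit with 2 objectives; the true mean reward
# vectors below are illustrative only.
true_means = np.array([[0.8, 0.2],
                       [0.5, 0.5],
                       [0.2, 0.9]])
K, M = true_means.shape
T = 5000

# Unknown user preference over objectives (a weight vector on the simplex);
# the learner only sees noisy samples of it (a simplification of the
# paper's "unknown preference" setting).
true_pref = np.array([0.3, 0.7])

counts = np.zeros(K)
mean_est = np.zeros((K, M))
pref_obs = np.zeros(M)

for t in range(1, T + 1):
    # Preference estimation: average noisy preference samples, then
    # renormalize onto the simplex.
    pref_obs += true_pref + rng.normal(0, 0.05, size=M)
    pref_est = np.clip(pref_obs / t, 1e-6, None)
    pref_est /= pref_est.sum()

    # Confidence-interval bonus (a standard UCB-style choice).
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))

    # Preference-weighted optimistic score over the objective estimates.
    ucb = mean_est @ pref_est + bonus
    ucb[counts == 0] = np.inf  # pull each arm at least once
    arm = int(np.argmax(ucb))

    # Observe a noisy multi-objective reward and update the running mean.
    reward = true_means[arm] + rng.normal(0, 0.1, size=M)
    counts[arm] += 1
    mean_est[arm] += (reward - mean_est[arm]) / counts[arm]

# The preference-optimal arm maximizes the preference-weighted true mean.
best = int(np.argmax(true_means @ true_pref))
print("most-played arm:", int(np.argmax(counts)), "preference-optimal arm:", best)
```

Under these assumptions the algorithm concentrates its pulls on the arm that is best for this particular preference, rather than on an arbitrary Pareto-optimal arm.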
📝 Abstract
Multi-objective multi-armed bandit (MO-MAB) problems traditionally aim to achieve Pareto optimality. However, real-world scenarios often involve users with varying preferences across objectives, so a Pareto-optimal arm may score highly for one user yet perform poorly for another. This highlights the need for customized learning, a factor often overlooked in prior research. To address this, we study a preference-aware MO-MAB framework in the presence of explicit user preferences. It shifts the focus from achieving Pareto optimality to optimizing further within the Pareto front under preference-centric customization. To our knowledge, this is the first theoretical study of customized MO-MAB optimization with explicit user preferences. Motivated by practical applications, we explore two scenarios: unknown preference and hidden preference, each presenting unique challenges for algorithm design and analysis. At the core of our algorithms are preference estimation and preference-aware optimization mechanisms that adapt to user preferences effectively. We further develop novel analytical techniques to establish near-optimal regret bounds for the proposed algorithms. Strong empirical performance confirms the effectiveness of our approach.