Provably Efficient Multi-Objective Bandit Algorithms under Preference-Centric Customization

📅 2025-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Conventional multi-objective multi-armed bandit (MO-MAB) methods solely pursue Pareto optimality, ignoring users’ heterogeneous preferences across objectives—resulting in solution sets that fail to satisfy personalized requirements. Method: We propose the first preference-aware MO-MAB framework, formally introducing two novel settings: “unknown preferences” and “hidden preferences.” Our adaptive algorithm integrates preference estimation, preference-weighted optimization, and a confidence-interval-driven multi-objective trade-off mechanism. Contribution/Results: We provide rigorous theoretical guarantees, proving that the algorithm achieves a near-optimal regret bound of $\tilde{O}(\sqrt{T})$. Empirical evaluations demonstrate that our method significantly outperforms existing Pareto-oriented baselines across diverse preference scenarios, enabling customized optimization within the Pareto frontier.
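The summary describes an algorithm combining preference weighting with a confidence-interval-driven optimism mechanism. The paper's exact algorithm is not reproduced here; the sketch below is a minimal illustration of the general idea, assuming per-objective Bernoulli rewards and a known preference vector `w` (both assumptions, not details from the paper): each arm's per-objective UCB is scalarized by the preference weights, and the arm with the best weighted optimistic score is pulled.

```python
import numpy as np

def preference_weighted_ucb(means, counts, t, w, alpha=2.0):
    """Pick the arm maximizing the preference-weighted sum of per-objective UCBs.

    means:  (K, D) empirical mean reward per arm and objective
    counts: (K,)   pull counts per arm
    w:      (D,)   preference weights over objectives (assumed known here)
    """
    bonus = np.sqrt(alpha * np.log(t) / np.maximum(counts, 1))  # (K,) confidence radii
    ucb = means + bonus[:, None]          # optimistic estimate per objective
    return int(np.argmax(ucb @ w))        # best arm under the weighted UCB score

# Toy run: 2 arms, 2 objectives; the user favors objective 0.
rng = np.random.default_rng(0)
true_means = np.array([[0.9, 0.2],       # arm 0: strong on objective 0
                       [0.3, 0.8]])      # arm 1: strong on objective 1
w = np.array([0.8, 0.2])                 # hypothetical preference vector
K, D, T = 2, 2, 2000
means, counts = np.zeros((K, D)), np.zeros(K)
for t in range(1, T + 1):
    a = preference_weighted_ucb(means, counts, t, w)
    r = rng.binomial(1, true_means[a])   # Bernoulli reward for each objective
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]
```

With these weights, arm 0 (weighted value 0.76) dominates arm 1 (0.40), so the simulation concentrates its pulls on arm 0. Both arms may be Pareto-optimal; the preference weights are what break the tie, which is the customization the paper targets.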

📝 Abstract
Multi-objective multi-armed bandit (MO-MAB) problems traditionally aim to achieve Pareto optimality. However, real-world scenarios often involve users with varying preferences across objectives, resulting in a Pareto-optimal arm that may score high for one user but perform quite poorly for another. This highlights the need for customized learning, a factor often overlooked in prior research. To address this, we study a preference-aware MO-MAB framework in the presence of explicit user preference. It shifts the focus from achieving Pareto optimality to further optimizing within the Pareto front under preference-centric customization. To our knowledge, this is the first theoretical study of customized MO-MAB optimization with explicit user preferences. Motivated by practical applications, we explore two scenarios: unknown preference and hidden preference, each presenting unique challenges for algorithm design and analysis. At the core of our algorithms are preference estimation and preference-aware optimization mechanisms to adapt to user preferences effectively. We further develop novel analytical techniques to establish near-optimal regret of the proposed algorithms. Strong empirical performance confirms the effectiveness of our approach.
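The abstract highlights preference estimation as a core mechanism for the unknown/hidden-preference settings. The paper's feedback model is not specified on this page, so the sketch below assumes a simple linear model (an assumption, not the paper's): the learner observes multi-objective rewards plus noisy scalar feedback $y_t = \langle w, r_t \rangle + \varepsilon_t$, and recovers the preference vector by ridge regression.

```python
import numpy as np

def estimate_preference(R, y, lam=1.0):
    """Ridge-regression estimate of a preference vector from scalar feedback.

    R: (T, D) observed multi-objective reward vectors
    y: (T,)   noisy scalar feedback, assumed to be <w, r_t> + noise
    """
    D = R.shape[1]
    w_hat = np.linalg.solve(R.T @ R + lam * np.eye(D), R.T @ y)
    w_hat = np.clip(w_hat, 0.0, None)     # preferences are non-negative
    return w_hat / w_hat.sum()            # normalize onto the simplex

# Toy check with a hypothetical ground-truth preference.
rng = np.random.default_rng(1)
w_true = np.array([0.7, 0.3])
R = rng.uniform(size=(500, 2))
y = R @ w_true + 0.05 * rng.standard_normal(500)
w_hat = estimate_preference(R, y)
```

In a full algorithm, such an estimate would feed into the preference-weighted arm selection at each round, with its uncertainty folded into the confidence intervals.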
Problem

Research questions and friction points this paper is trying to address.

Optimizing multi-objective bandit under user preferences
Addressing Pareto optimality with preference customization
Developing algorithms for unknown and hidden preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preference-aware MO-MAB framework
Preference estimation and optimization
Novel analytical techniques
Linfeng Cao
Department of CSE, The Ohio State University
Ming Shi
Assistant Professor, The State University of New York at Buffalo
Learning Theory, Online Optimization, Networking, Security
Ness B. Shroff
Department of ECE, The Ohio State University