Blessings of Multiple Good Arms in Multi-Objective Linear Bandits

📅 2026-02-13
📈 Citations: 0
Influential: 0
📝 Abstract
The multi-objective bandit setting has traditionally been regarded as more complex than the single-objective case, as multiple objectives must be optimized simultaneously. In contrast to this prevailing view, we demonstrate that when multiple good arms exist for multiple objectives, they can induce a surprising benefit: implicit exploration. Under this condition, we show that simple algorithms that greedily select actions in most rounds can nonetheless achieve strong performance, both theoretically and empirically. To our knowledge, this is the first study to introduce implicit exploration in both multi-objective and parametric bandit settings without any distributional assumptions on the contexts. We further introduce a framework for effective Pareto fairness, which provides a principled approach to rigorously analyzing the fairness of multi-objective bandit algorithms.
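To make the setting concrete, the sketch below shows one plausible reading of the abstract's "greedy" rule in a multi-objective linear bandit: maintain a ridge estimate per objective, compute estimated reward vectors for all arms, and pick uniformly among the arms on the estimated Pareto front. This is an illustrative assumption based on the abstract, not the paper's exact algorithm; the function names, the shared design matrix, and the uniform tie-breaking rule are all hypothetical choices.

```python
import numpy as np

def pareto_front(estimates):
    """Indices of arms whose estimated reward vectors are not dominated.

    estimates: (K, M) array, one row of M objective values per arm.
    Arm i is dominated if some arm j is >= on every objective and > on one.
    """
    K = estimates.shape[0]
    front = []
    for i in range(K):
        dominated = any(
            np.all(estimates[j] >= estimates[i]) and np.any(estimates[j] > estimates[i])
            for j in range(K) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def greedy_mo_linear_bandit(arms, sample_rewards, T, M, lam=1.0, rng=None):
    """Greedy multi-objective linear bandit loop (illustrative sketch only).

    arms: (K, d) feature matrix; sample_rewards(x) returns an M-vector.
    Assumes each objective m has rewards ~ <x, theta_m> + noise.
    """
    rng = rng or np.random.default_rng()
    K, d = arms.shape
    V = lam * np.eye(d)                      # shared ridge design matrix
    b = np.zeros((M, d))                     # one response vector per objective
    theta_hat = np.zeros((M, d))
    for t in range(T):
        theta_hat = np.linalg.solve(V, b.T).T    # (M, d) per-objective estimates
        est = arms @ theta_hat.T                 # (K, M) estimated reward vectors
        candidates = pareto_front(est)           # greedy: estimated Pareto set
        a = rng.choice(candidates)               # uniform over the front
        x = arms[a]
        y = sample_rewards(x)                    # observed M-vector of rewards
        V += np.outer(x, x)
        b += np.outer(y, x)
    return theta_hat
```

Selecting uniformly over the estimated Pareto front is one simple way to spread pulls across good arms for different objectives; the abstract's claim is that when many such good arms exist, this kind of mostly-greedy rule already yields enough implicit exploration.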
Problem

Research questions and friction points this paper is trying to address.

multi-objective bandits
implicit exploration
Pareto fairness
linear bandits
good arms
Innovation

Methods, ideas, or system contributions that make the work stand out.

implicit exploration
multi-objective bandits
Pareto fairness
parametric bandits
greedy algorithms