When Exploration Comes for Free with Mixture-Greedy: Do we need UCB in Diversity-Aware Multi-Armed Bandits?

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the costly suboptimal sampling inherent in diversity-aware generative model selection by proposing a Mixture-Greedy strategy that dispenses with conventional explicit exploration mechanisms based on upper confidence bounds (UCB). Instead, it leverages the intrinsic structure of diversity-oriented objective functions—such as FID and Vendi scores—to enable implicit exploration, provably ensuring effective exploration without optimistic bonuses under certain conditions. Theoretical analysis establishes a sublinear regret bound for the proposed approach. Empirical evaluations across multiple datasets demonstrate that Mixture-Greedy converges faster and achieves higher sample efficiency than UCB-based methods, while simultaneously yielding superior diversity and generation quality. These findings challenge the prevailing paradigm in multi-armed bandits that explicit exploration is indispensable.

📝 Abstract
Efficient selection among multiple generative models is increasingly important in modern generative AI, where sampling from suboptimal models is costly. This problem can be formulated as a multi-armed bandit task. Under diversity-aware evaluation metrics, a non-degenerate mixture of generators can outperform any individual model, distinguishing this setting from classical best-arm identification. Prior approaches therefore incorporate an Upper Confidence Bound (UCB) exploration bonus into the mixture objective. However, across multiple datasets and evaluation metrics, we observe that the UCB term consistently slows convergence and often reduces sample efficiency. In contrast, a simple Mixture-Greedy strategy without explicit UCB-type optimism converges faster and achieves even better performance, particularly for widely used metrics such as FID and Vendi where tight confidence bounds are difficult to construct. We provide theoretical insight explaining this behavior: under transparent structural conditions, diversity-aware objectives induce implicit exploration by favoring interior mixtures, leading to linear sampling of all arms and sublinear regret guarantees for entropy-based, kernel-based, and FID-type objectives. These results suggest that in diversity-aware multi-armed bandits for generative model selection, exploration can arise intrinsically from the objective geometry, questioning the necessity of explicit confidence bonuses.
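The Mixture-Greedy idea described in the abstract can be sketched in a toy simulation. This is a hypothetical illustration, not the authors' code: the "generators" are simulated categorical distributions, the diversity objective is the Shannon entropy of the plug-in mixture (a stand-in for the entropy-based objectives the paper analyzes), and the greedy mixture weights are found by a coarse grid search over the simplex. Note there is no UCB bonus anywhere; every arm keeps being sampled because the entropy objective is maximized at an interior mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 "generators" (arms), each emitting samples
# from a fixed categorical distribution over 4 outcome classes.
TRUE_DISTS = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.10, 0.10, 0.40, 0.40],
])
K, C = TRUE_DISTS.shape

def entropy(p):
    """Shannon entropy of a probability vector (nats)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Coarse grid over the 3-simplex used to pick greedy mixture weights.
grid = [np.array([i, j, 10 - i - j]) / 10.0
        for i in range(11) for j in range(11 - i)]

counts = np.ones((K, C))          # Laplace-smoothed per-arm outcome counts
pulls = np.zeros(K, dtype=int)

for t in range(2000):
    p_hat = counts / counts.sum(axis=1, keepdims=True)
    # Mixture-Greedy step: choose the weights maximizing the plug-in
    # diversity objective -- no optimism term. Exploration is implicit:
    # the entropy maximizer is an interior mixture, so all arms retain
    # positive sampling probability.
    w = max(grid, key=lambda w: entropy(w @ p_hat))
    arm = rng.choice(K, p=w)
    outcome = rng.choice(C, p=TRUE_DISTS[arm])
    counts[arm, outcome] += 1
    pulls[arm] += 1

print(pulls)  # each arm receives a linear fraction of the 2000 pulls
```

In this toy setting the entropy-optimal mixture is roughly (0.25, 0.25, 0.5), so all three arms are sampled at a linear rate, matching the paper's claim that diversity-aware objectives favoring interior mixtures yield implicit exploration.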
Problem

Research questions and friction points this paper is trying to address.

multi-armed bandits
generative model selection
diversity-aware evaluation
exploration-exploitation
UCB
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-Greedy
diversity-aware bandits
implicit exploration
generative model selection
sublinear regret
Bahar Dibaei Nia
Department of Computer Science and Engineering, Chinese University of Hong Kong
Farzan Farnia
Assistant Professor, Chinese University of Hong Kong
Machine Learning · Optimization · Information Theory