Fusing Reward and Dueling Feedback in Stochastic Bandits

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies collaborative learning in stochastic multi-armed bandits where both absolute reward feedback and relative pairwise comparison feedback are simultaneously available, aiming to minimize cumulative regret. First, it establishes the theoretical lower bound on regret under this hybrid feedback setting. Building upon this, it proposes two fusion mechanisms: elimination-based and decomposition-based. The latter achieves a minimax-optimal upper bound matching the lower bound by decoupling feedback sources, dynamically allocating exploration budget, and applying arm elimination. Under standard sub-Gaussian assumptions, the decomposition-based algorithm attains an $O(sqrt{KT})$ regret bound—where $K$ denotes the number of arms and $T$ the time horizon—without requiring additional regularity conditions (e.g., strong stochastic transitivity). Empirical evaluations demonstrate that it significantly outperforms baselines using only one feedback type, validating both the theoretical soundness and practical efficacy of hybrid feedback modeling.

Technology Category

Application Category

📝 Abstract
This paper investigates the fusion of absolute (reward) and relative (dueling) feedback in stochastic bandits, where both feedback types are gathered in each decision round. We derive a regret lower bound, demonstrating that an efficient algorithm may incur only the smaller among the reward and dueling-based regret for each individual arm. We propose two fusion approaches: (1) a simple elimination fusion algorithm that leverages both feedback types to explore all arms and unifies collected information by sharing a common candidate arm set, and (2) a decomposition fusion algorithm that selects the more effective feedback to explore the corresponding arms and randomly assigns one feedback type for exploration and the other for exploitation in each round. The elimination fusion experiences a suboptimal multiplicative term of the number of arms in regret due to the intrinsic suboptimality of dueling elimination. In contrast, the decomposition fusion achieves regret matching the lower bound up to a constant under a common assumption. Extensive experiments confirm the efficacy of our algorithms and theoretical results.
Problem

Research questions and friction points this paper is trying to address.

Combining reward and dueling feedback in stochastic bandits
Minimizing regret by leveraging both feedback types efficiently
Proposing algorithms to optimize feedback fusion and reduce regret
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses reward and dueling feedback in bandits
Elimination algorithm shares common arm set
Decomposition algorithm matches regret lower bound
🔎 Similar Papers
No similar papers found.