🤖 AI Summary
This study addresses the stochastic dueling bandits problem over continuous action spaces endowed with a Lipschitz structure, where feedback is limited to pairwise comparisons. The work proposes the first algorithm tailored to this setting, integrating round-based exploration, recursive region elimination, and adaptive reference-arm selection. The analysis leverages the notion of zooming dimension to characterize the complexity of the near-optimal region. The paper pioneers the combination of Lipschitz bandits with dueling bandits, yielding an efficient strategy that requires only logarithmic space and establishing a new analytical framework for relative feedback. Theoretically, the algorithm achieves a cumulative regret upper bound of $\tilde{O}(T^{(d_z+1)/(d_z+2)})$, where $d_z$ denotes the zooming dimension.
📝 Abstract
We study, for the first time, stochastic dueling bandits over continuous action spaces with Lipschitz structure, where feedback is purely comparative. While dueling bandits and Lipschitz bandits have been studied separately, their combination has remained unexplored. We propose the first algorithm for Lipschitz dueling bandits, using round-based exploration and recursive region elimination guided by an adaptive reference arm. We develop new analytical tools for relative feedback and prove a regret bound of $\tilde O\left(T^{\frac{d_z+1}{d_z+2}}\right)$, where $d_z$ is the zooming dimension of the near-optimal region. Furthermore, our algorithm uses only logarithmic space in the total time horizon, the best achievable by any bandit algorithm over a continuous action space.
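To make the high-level recipe concrete, here is a minimal illustrative sketch of round-based region elimination with an adaptive reference arm on a 1-D action space. This is not the paper's algorithm: the latent utility `f`, the comparison oracle `duel`, the grid resolution, and the confidence margin are all assumptions chosen for the demo.

```python
import math
import random

def duel(x, y, f=lambda a: -abs(a - 0.3)):
    """Simulated comparison oracle: True if x beats y.
    P(x beats y) = sigmoid(f(x) - f(y)); f is an assumed latent utility,
    a Lipschitz function peaked at 0.3 (not part of the paper)."""
    p = 1.0 / (1.0 + math.exp(-(f(x) - f(y))))
    return random.random() < p

def lipschitz_dueling_sketch(rounds=6, samples=400, seed=0):
    """Illustrative round-based elimination: duel a grid of candidate
    arms against an adaptive reference arm, keep arms whose empirical
    win count is within a confidence margin of the best, and shrink
    the active region to the surviving arms."""
    random.seed(seed)
    lo, hi = 0.0, 1.0              # active region
    ref = 0.5                      # adaptive reference arm
    for _ in range(rounds):
        grid = [lo + (hi - lo) * i / 8 for i in range(9)]   # candidate arms
        wins = {x: sum(duel(x, ref) for _ in range(samples)) for x in grid}
        ref = max(wins, key=wins.get)                       # new reference arm
        margin = 0.05 * samples     # crude confidence slack (an assumption)
        keep = [x for x in grid if wins[x] >= wins[ref] - margin]
        lo, hi = min(keep), max(keep)                       # shrink the region
    return ref

best = lipschitz_dueling_sketch()
```

Under these assumptions the surviving region contracts around the utility peak at 0.3, using only pairwise win/loss feedback and storing one interval plus one reference arm per round, which is where the logarithmic space usage comes from in spirit.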