🤖 AI Summary
This paper addresses dynamic parameter optimization in metric action spaces against oblivious Lipschitz adversaries, a class of adversarial multi-armed bandit problems with large yet structured configuration spaces. To this end, we propose ABoB (Adversarial Bandit over Bandits), a hierarchical adversarial bandit framework that integrates metric-space clustering of actions, hierarchical policy selection, and online environment-aware adaptation. Theoretically, ABoB retains the standard worst-case regret bound of $O(\sqrt{kT})$ while leveraging local Lipschitz structure to achieve an improved bound of $O(k^{1/4}\sqrt{T})$ under favorable conditions. Empirically, ABoB reduces regret by up to 50% compared to flat-bandit baselines in both synthetic and real-world storage-system experiments, significantly accelerating convergence. Moreover, it gives a unified treatment of both stochastic and fully adversarial environments, demonstrating robustness across diverse uncertainty models.
📄 Abstract
Motivated by dynamic parameter optimization in finite but large action (configuration) spaces, this work studies the nonstochastic multi-armed bandit (MAB) problem in metric action spaces with oblivious Lipschitz adversaries. We propose ABoB, a hierarchical Adversarial Bandit over Bandits algorithm that can use state-of-the-art existing "flat" algorithms, but additionally clusters similar configurations to exploit local structure and adapt to changing environments. We prove that in the worst case such a clustering approach cannot hurt too much, and that ABoB guarantees a standard worst-case regret bound of $O\left(k^{\frac{1}{2}}T^{\frac{1}{2}}\right)$, where $T$ is the number of rounds and $k$ is the number of arms, matching the traditional flat approach. However, under favorable conditions related to the algorithm properties, the cluster properties, and certain Lipschitz conditions, the regret bound can be improved to $O\left(k^{\frac{1}{4}}T^{\frac{1}{2}}\right)$. Simulations and experiments on a real storage system demonstrate that ABoB, using standard algorithms such as EXP3 and Tsallis-INF, achieves lower regret and faster convergence than the flat method, with up to 50% improvement in known previous setups, both nonstochastic and stochastic, as well as in our settings.
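To make the bandit-over-bandits idea concrete, here is a minimal sketch (not the paper's implementation) of a two-level loop with standard EXP3 at both levels: an outer EXP3 chooses a cluster, an inner EXP3 chooses an arm within that cluster, and the observed reward updates both. The class and function names, the cluster layout, the `gamma` values, and the reward function are all illustrative assumptions.

```python
import math
import random

class EXP3:
    """Standard EXP3 adversarial bandit (exponential weights with
    uniform exploration and importance-weighted reward estimates)."""
    def __init__(self, n_arms, gamma=0.1):
        self.n = n_arms
        self.gamma = gamma
        self.weights = [1.0] * n_arms

    def probs(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n
                for w in self.weights]

    def select(self):
        # Sample an arm from the current mixed distribution.
        return random.choices(range(self.n), weights=self.probs())[0]

    def update(self, arm, reward):
        # Importance-weighted estimate keeps the update unbiased
        # even though only the pulled arm's reward is observed.
        p = self.probs()[arm]
        est = reward / p
        self.weights[arm] *= math.exp(self.gamma * est / self.n)

def abob_step(outer, inners, reward_fn):
    """One round of a hierarchical bandit-over-bandits loop:
    the outer bandit picks a cluster, that cluster's inner bandit
    picks a configuration, and the reward (in [0, 1]) feeds both."""
    c = outer.select()
    a = inners[c].select()
    r = reward_fn(c, a)
    outer.update(c, r)
    inners[c].update(a, r)
    return c, a, r
```

With clustered rewards, the outer bandit quickly concentrates on the good cluster, so the inner bandits only need to resolve the (smaller) local choice, which is the intuition behind the improved regret under local Lipschitz structure.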