🤖 AI Summary
This paper addresses performance degradation in cascading-bandit online learning to rank caused by adversarial corruption of user feedback (e.g., click fraud). We first formalize the “Cascading Bandits with Adversarial Corruptions” (CBAC) framework, modeling feedback perturbed by an adaptive adversary. To keep recommender systems robust under such attacks, we propose two novel algorithms, one for a known corruption level and one for an unknown level, both built on robust UCB confidence bounds, adaptive corruption detection, and decoupled modeling of cascading feedback. Theoretically, both algorithms achieve logarithmic regret in the absence of corruption, with regret growing linearly in the corruption level otherwise. Empirical evaluation demonstrates that, across multiple corruption intensities, our methods improve robustness by over 40% compared to standard cascading bandits, significantly enhancing the stability and reliability of online recommendation systems.
📝 Abstract
Online learning to rank sequentially recommends a small list of items to users from a large candidate set and receives the users' click feedback. In many real-world scenarios, users browse the recommended list in order and click the first attractive item without checking the rest. Such behavior is usually formulated as the cascade model. Many recent works study algorithms for cascading bandits, an online learning-to-rank framework under the cascade model. However, the performance of existing methods may drop significantly if part of the user feedback is adversarially corrupted (e.g., by click fraud). In this work, we study how to resist adversarial corruptions in cascading bandits. We first formulate the *Cascading Bandits with Adversarial Corruptions* (CBAC) problem, which assumes an adaptive adversary that may manipulate the user feedback. We then propose two robust algorithms for this problem, which assume the corruption level is known and unknown (corruption-agnostic), respectively. We show that both algorithms achieve logarithmic regret when the algorithm is not under attack, and that the regret increases linearly with the corruption level. Experimental results further verify the robustness of our methods.
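To make the setting concrete, the cascade model and the robust-UCB idea can be sketched as follows. This is an illustrative toy, not the paper's algorithms: it uses CascadeUCB1-style indices, and the function name, the exploration constant, and the `C / pulls` radius-widening term (hedging against a known total corruption budget `C`) are assumptions for illustration.

```python
import math
import random

def robust_cascade_ucb(attractions, K=2, T=2000, C=0.0, seed=0):
    """Toy cascading bandit with corruption-widened UCB indices (illustrative)."""
    rng = random.Random(seed)
    L = len(attractions)
    pulls = [0] * L          # number of times each item was examined
    wins = [0.0] * L         # observed clicks per item
    # Probability of at least one click under the best possible list of K items.
    p_best = 1.0
    for p in sorted(attractions, reverse=True)[:K]:
        p_best *= (1.0 - p)
    p_best = 1.0 - p_best
    regret = 0.0
    for t in range(1, T + 1):
        def ucb(i):
            if pulls[i] == 0:
                return float("inf")  # force initial exploration
            mean = wins[i] / pulls[i]
            # Extra C / pulls term widens the radius so that up to C total
            # adversarial corruption cannot push the true mean outside it.
            radius = math.sqrt(1.5 * math.log(t) / pulls[i]) + C / pulls[i]
            return min(1.0, mean + radius)
        ranked = sorted(range(L), key=ucb, reverse=True)[:K]
        # Cascade feedback: the user scans in order and clicks the first
        # attractive item, never examining anything after the click.
        click_pos = -1
        for pos, i in enumerate(ranked):
            if rng.random() < attractions[i]:
                click_pos = pos
                break
        # Decoupled updates: every examined position yields one Bernoulli
        # observation (0 before the click, 1 at the click).
        last = click_pos if click_pos >= 0 else K - 1
        for pos in range(last + 1):
            i = ranked[pos]
            pulls[i] += 1
            wins[i] += 1.0 if pos == click_pos else 0.0
        # Accumulate expected per-round regret of the chosen list.
        p_list = 1.0
        for i in ranked:
            p_list *= (1.0 - attractions[i])
        regret += p_best - (1.0 - p_list)
    return regret, pulls

reg, pulls = robust_cascade_ucb([0.9, 0.8, 0.3, 0.2, 0.1], K=2, T=2000)
```

With no corruption (`C=0`), the learner concentrates its examinations on the two most attractive items and the cumulative regret stays small relative to the horizon, which is the logarithmic-regret behavior the abstract describes for the uncorrupted case.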