Online Learning to Rank under Corruption: A Robust Cascading Bandits Approach

๐Ÿ“… 2025-11-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses data corruption caused by click fraud in online learning to rank (OLTR). To mitigate malicious click interference, the authors propose a robust cascading bandits framework and introduce the MSUCB algorithm, the first to integrate a mean-of-medians estimator into cascading bandits. MSUCB achieves optimal logarithmic regret under clean feedback and degrades gracefully under corruption, with regret growing only by an additive term tied to the total amount of corruption. Crucially, the estimator automatically filters out anomalous clicks each round, accelerating empirical convergence. Experiments on real-world datasets demonstrate that MSUCB achieves 97.35% and 91.60% cumulative-regret improvements over two state-of-the-art baselines while maintaining strong robustness. The work provides both theoretical guarantees and a practical solution for robust OLTR.
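The summary's core ingredient is the mean-of-medians estimator: partition the observed rewards into groups, take each group's median (which discards extreme corrupted values), then average the medians. The following is a minimal sketch of that idea; the group count, shuffling step, and function names are illustrative assumptions, not the paper's exact construction.

```python
import random
from statistics import median

def mean_of_medians(samples, n_groups=5):
    """Robust estimate of the mean: partition the samples into groups,
    take each group's median (so a few corrupted outliers cannot drag a
    group's summary away), then average the group medians.

    NOTE: group count and shuffling are illustrative choices, not the
    paper's exact construction.
    """
    samples = list(samples)
    random.shuffle(samples)  # spread corrupted samples across groups
    groups = [samples[i::n_groups] for i in range(n_groups)]
    groups = [g for g in groups if g]  # drop empty groups for tiny inputs
    return sum(median(g) for g in groups) / len(groups)
```

With clean data every group's median tracks the sample mean, so little is lost; under corruption, an outlier can shift a group's median only if it outnumbers the clean samples in that group.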

๐Ÿ“ Abstract
Online learning to rank (OLTR) studies how to recommend a short ranked list of items from a large pool and improves future rankings based on user clicks. This setting is commonly modeled as cascading bandits, where the objective is to maximize the likelihood that the user clicks on at least one of the presented items across as many timesteps as possible. However, such systems are vulnerable to click fraud and other manipulations (i.e., corruption), where bots or paid click farms inject corrupted feedback that misleads the learning process and degrades user experience. In this paper, we propose MSUCB, a robust algorithm that incorporates a novel mean-of-medians estimator, which, to our knowledge, is applied to the bandits-with-corruption setting for the first time. This estimator behaves like a standard mean in the absence of corruption, so no cost is paid for robustness. Under corruption, the median step filters out outliers and corrupted samples, keeping the estimate close to its true value. Updating this estimate at every round further accelerates empirical convergence in experiments. Hence, MSUCB achieves optimal logarithmic regret in the absence of corruption and degrades gracefully under corruption, with regret increasing only by an additive term tied to the total corruption. Comprehensive and extensive experiments on real-world datasets further demonstrate that our approach consistently outperforms prior methods while maintaining strong robustness. In particular, it achieves a 97.35% and a 91.60% regret improvement over two state-of-the-art methods.
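The cascade model behind the objective above is standard: the user scans the list top-down and the list "succeeds" if at least one item attracts a click. A short sketch of that success probability (the `p_at_least_one_click` name and independence assumption are mine, matching the usual cascade-model formulation):

```python
from math import prod

def p_at_least_one_click(attraction_probs):
    """Cascade model: the user scans the ranked list top-down and clicks
    the first attractive item. Assuming items attract independently with
    probabilities w_i, the list yields at least one click with
    probability 1 - prod(1 - w_i), which is what the learner maximizes.
    """
    return 1.0 - prod(1.0 - w for w in attraction_probs)
```

For example, a list of two items with attraction probabilities 0.5 each succeeds with probability 0.75, so the learner prefers lists whose items jointly cover the user's interest.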
Problem

Research questions and friction points this paper is trying to address.

Online learning to rank is vulnerable to click fraud and manipulations
Cascading bandits need robustness against corrupted feedback from bots
Existing methods break down sharply under corruption; robust algorithms should instead degrade gracefully as corruption grows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses mean-of-medians estimator for robustness
Filters outliers with median step under corruption
Achieves logarithmic regret with graceful degradation
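Putting the innovation bullets together, one round of a UCB-style cascading bandit ranks items by a robust estimate plus an optimism bonus and shows the top-k. This is a hypothetical sketch of that round, not the authors' MSUCB: the confidence-radius constant and the function names are assumptions, and the robust per-item estimates are taken as given (e.g., from a mean-of-medians estimator).

```python
import math

def ucb_indices(estimates, counts, t):
    """Optimistic index per item: robust mean estimate plus a UCB-style
    exploration bonus. The radius sqrt(1.5 * log t / n) is an assumed
    illustrative choice; unplayed items get an infinite index so they
    are tried first.
    """
    return [
        est + math.sqrt(1.5 * math.log(t) / n) if n > 0 else float("inf")
        for est, n in zip(estimates, counts)
    ]

def select_list(estimates, counts, t, k):
    """One round: rank all items by their optimistic index and present
    the top-k as the cascading list."""
    idx = ucb_indices(estimates, counts, t)
    return sorted(range(len(idx)), key=lambda i: idx[i], reverse=True)[:k]
```

After the round, the cascade feedback (which items were examined and whether one was clicked) would update the robust estimates and counts for the examined items.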
๐Ÿ”Ž Similar Papers
No similar papers found.