AI Summary
Existing contextual dueling bandit models struggle to capture users' preference comparisons based on historical consumption records, resulting in low-quality implicit feedback and limited learning efficiency. To address this, we propose a novel contextual dueling bandit framework that enables direct comparison between the current recommendation and previously consumed items, yielding high-information preference signals at no additional regret. To handle temporal dependencies in historical data, we combine matrix concentration analysis with a brief randomized exploration phase. Theoretically, our method achieves an $O(\sqrt{T})$ regret bound. Empirical evaluations demonstrate that reusing historical items significantly reduces cumulative regret compared to baselines that only compare concurrently recommended items. Our key contribution is the first systematic incorporation of historical consumption items into the dueling query mechanism, substantially improving the reliability of implicit preference modeling and sample efficiency.
Abstract
The contextual dueling bandit problem models adaptive recommender systems, where the algorithm presents a set of items to the user, and the user's choice reveals their preference. This setup is well suited for the implicit choices users make when navigating a content platform, but it does not capture other possible comparison queries. Motivated by the fact that users provide more reliable feedback after consuming items, we propose a new bandit model that can be described as follows. The algorithm recommends one item per time step; after consuming that item, the user is asked to compare it with another item chosen from the user's consumption history. Importantly, in our model, this comparison item can be chosen without incurring any additional regret, potentially leading to better performance. However, the regret analysis is challenging because of the temporal dependency in the user's history. To overcome this challenge, we first show that the algorithm can construct informative queries provided the history is rich, i.e., satisfies a certain diversity condition. We then show that a short initial random exploration phase is sufficient for the algorithm to accumulate a rich history with high probability. This result, proven via matrix concentration bounds, yields $O(\sqrt{T})$ regret guarantees. Additionally, our simulations show that reusing past items for comparisons can lead to significantly lower regret than only comparing between simultaneously recommended items.
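To make the model concrete, the loop described above can be sketched in a small simulation: recommend one item per round (random during a brief initial exploration phase, greedy afterwards), then query a comparison against an item from the user's own history, which costs no extra regret since the user has already consumed it. Everything here is an illustrative assumption rather than the paper's actual algorithm: the linear Bradley-Terry preference model, the choice of `T0`, the rule that picks the historical comparator by design-matrix uncertainty, and the crude linear surrogate update are all stand-ins for the real estimator and query rule.

```python
import numpy as np

# Hypothetical simulation of the dueling-with-history loop; all parameter
# choices and update rules below are illustrative assumptions.
rng = np.random.default_rng(0)
d, T, T0 = 5, 2000, 50                     # dimension, horizon, exploration length

theta_star = rng.normal(size=d)            # hidden user-preference vector
theta_star /= np.linalg.norm(theta_star)

def fresh_items(k=20):
    """A fresh set of k candidate items (random unit feature vectors)."""
    X = rng.normal(size=(k, d))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def duel(x, h):
    """Bradley-Terry comparison: True if x is preferred to h."""
    p = 1.0 / (1.0 + np.exp(-(x - h) @ theta_star))
    return rng.random() < p

V = np.eye(d)                              # design matrix of comparison directions
b = np.zeros(d)
history = []                               # consumed items, available for reuse
regret = 0.0

for t in range(T):
    X = fresh_items()
    best = X[np.argmax(X @ theta_star)]    # oracle item, for regret accounting only
    theta_hat = np.linalg.solve(V, b)      # crude least-squares preference estimate
    if t < T0:
        x = X[rng.integers(len(X))]        # brief initial random exploration
    else:
        x = X[np.argmax(X @ theta_hat)]    # greedy recommendation afterwards
    regret += (best - x) @ theta_star

    if history:
        # Reuse a past item: pick the comparator whose difference direction is
        # most uncertain under the current design matrix. This query adds
        # information but no regret, since the item was already consumed.
        H = np.asarray(history)
        Z = x - H
        Vinv = np.linalg.inv(V)
        h = H[np.argmax(np.einsum('ij,jk,ik->i', Z, Vinv, Z))]
        z = x - h
        y = 1.0 if duel(x, h) else 0.0
        V += np.outer(z, z)
        b += (2.0 * y - 1.0) * z           # linear surrogate for the logit feedback
    history.append(x)

print(f"cumulative regret after {T} rounds: {regret:.1f}")
```

Because the comparator is drawn from the history rather than from a second recommended item, most of the regret in this sketch accrues only during the short exploration phase, mirroring the paper's point that historical comparisons are effectively free.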