Enhancing Bandit Algorithms with LLMs for Time-varying User Preferences in Streaming Recommendations

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing bandit-based recommendation methods, which struggle to model the time-varying nature of user preferences and explore inefficiently during the initial phase of online learning. To overcome these challenges, the authors propose HyperBandit+, a novel framework that feeds temporal features into a hypernetwork to dynamically capture the evolution of user preferences. They further introduce a large language model (LLM)-assisted warm-start mechanism that improves early-stage exploration through multi-step data augmentation, and incorporate low-rank decomposition to keep real-time inference efficient. Extensive experiments on real-world datasets demonstrate that HyperBandit+ significantly outperforms state-of-the-art approaches in terms of cumulative reward, and theoretical analysis establishes a sublinear regret upper bound for the method.
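The paper does not publish its warm-start equations here, but the idea of using LLM-simulated interactions to initialize a bandit policy before any real feedback arrives can be illustrated with a LinUCB-style sketch. Everything below (dimension `d`, the placeholder simulated data, `ucb_score`) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # hypothetical context dimension

# Stand-in for LLM Start: pretend an LLM has synthesized plausible
# (context, reward) interaction pairs via multi-step data augmentation.
# Here we just draw random placeholders of the same shape.
sim_X = rng.normal(size=(200, d))
sim_r = rng.normal(size=200)

# Warm-start a LinUCB-style policy: fit A and b offline on the simulated
# data instead of starting from A = I, b = 0, so early online
# exploration-exploitation is less blind.
lam = 1.0
A = lam * np.eye(d) + sim_X.T @ sim_X
b = sim_X.T @ sim_r
theta = np.linalg.solve(A, b)  # initial reward-parameter estimate

def ucb_score(x, alpha=1.0):
    """Optimistic score using the warm-started covariance A."""
    A_inv_x = np.linalg.solve(A, x)
    return float(x @ theta + alpha * np.sqrt(x @ A_inv_x))
```

Online, the policy would then update `A` and `b` with real interactions as usual; the warm start only changes the initial state.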

📝 Abstract
In real-world streaming recommender systems, user preferences evolve dynamically over time. Existing bandit-based methods treat time merely as a timestamp, neglecting its explicit relationship with user preferences and leading to suboptimal performance. Moreover, online learning methods often suffer from inefficient exploration-exploitation during the early online phase. To address these issues, we propose HyperBandit+, a novel contextual bandit policy that integrates a time-aware hypernetwork to adapt to time-varying user preferences and employs a large language model-assisted warm-start mechanism (LLM Start) to enhance exploration-exploitation efficiency in the early online phase. Specifically, HyperBandit+ leverages a neural network that takes time features as input and generates parameters for estimating time-varying rewards by capturing the correlation between time and user preferences. Additionally, the LLM Start mechanism employs multi-step data augmentation to simulate realistic interaction data for effective offline learning, providing warm-start parameters for the bandit policy in the early online phase. To meet real-time streaming recommendation demands, we adopt low-rank factorization to reduce hypernetwork training complexity. Theoretically, we rigorously establish a sublinear regret upper bound that accounts for both the hypernetwork and the LLM warm-start mechanism. Extensive experiments on real-world datasets demonstrate that HyperBandit+ consistently outperforms state-of-the-art baselines in terms of accumulated rewards.
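The abstract's core mechanism, a hypernetwork that maps time features to the parameters of a time-varying reward estimator, with low-rank factorization to shrink the generated parameter count, can be sketched as follows. All dimensions, weight shapes, and the bilinear reward form are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
d_time, d_ctx, rank = 8, 16, 4

# Low-rank hypernetwork: instead of emitting a full d_ctx x d_ctx
# preference matrix, it emits two rank-r factors U and V, cutting the
# hypernetwork output size from d_ctx**2 to 2 * d_ctx * rank.
W_u = rng.normal(0, 0.1, (d_ctx * rank, d_time))
W_v = rng.normal(0, 0.1, (d_ctx * rank, d_time))

def hyper_params(t_feat):
    """Map time features to a low-rank preference matrix U @ V.T."""
    U = (W_u @ t_feat).reshape(d_ctx, rank)
    V = (W_v @ t_feat).reshape(d_ctx, rank)
    return U @ V.T  # (d_ctx, d_ctx), varies with time

def estimate_reward(t_feat, user_x, item_x):
    """Bilinear time-varying reward: user^T M(t) item."""
    M = hyper_params(t_feat)
    return float(user_x @ M @ item_x)

# Usage: score candidate items at the current time step (greedy pick;
# the actual policy would add an exploration bonus on top of this).
t = rng.normal(size=d_time)
user = rng.normal(size=d_ctx)
items = rng.normal(size=(5, d_ctx))
scores = [estimate_reward(t, user, x) for x in items]
best = int(np.argmax(scores))
```

Because only the small factors depend on the time features, retraining the hypernetwork per time step stays cheap, which is the efficiency argument the abstract makes for low-rank factorization.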
Problem

Research questions and friction points this paper is trying to address.

time-varying user preferences
streaming recommendations
bandit algorithms
exploration-exploitation
online learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

time-aware hypernetwork
LLM warm-start
contextual bandits
streaming recommendations
low-rank factorization