Large Language Model-Enhanced Reinforcement Learning for Diverse and Novel Recommendations

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Recommender systems face the challenge of jointly optimizing diversity, novelty, and click relevance, while existing exploration strategies often rely either on stochastic perturbations or excessive dependence on large language models (LLMs). To address this, we propose LAAC (LLM-guided Adversarial Actor Critic): a lightweight framework that leverages an LLM as a reference policy to generate high-quality, novel candidates, while employing a compact, learnable policy network optimized continuously on real user interaction data. LAAC introduces a bilevel optimization mechanism and a regularization term explicitly encouraging exploration of underexposed items, thereby stabilizing value estimation and mitigating LLM-induced biases. Crucially, LAAC requires no LLM fine-tuning. Extensive experiments on multiple real-world datasets demonstrate significant improvements (+12.3% in diversity, +18.7% in novelty, and +5.2% in NDCG@10) while exhibiting strong robustness under long-tail and data-imbalanced conditions.

📝 Abstract
In recommendation systems, diversity and novelty are essential for capturing varied user preferences and encouraging exploration, yet many systems prioritize click relevance. While reinforcement learning (RL) has been explored to improve diversity, it often depends on random exploration that may not align with user interests. We propose LAAC (LLM-guided Adversarial Actor Critic), a novel method that leverages large language models (LLMs) as reference policies to suggest novel items, while training a lightweight policy to refine these suggestions using system-specific data. The method formulates training as a bilevel optimization between actor and critic networks, enabling the critic to selectively favor promising novel actions and the actor to improve its policy beyond LLM recommendations. To mitigate overestimation of unreliable LLM suggestions, we apply regularization that anchors critic values for unexplored items close to well-estimated dataset actions. Experiments on real-world datasets show that LAAC outperforms existing baselines in diversity, novelty, and accuracy, while remaining robust on imbalanced data, effectively integrating LLM knowledge without expensive fine-tuning.
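The actor side described above (a lightweight policy refined beyond the LLM's suggestions) can be sketched as a toy policy-gradient loop. This is a minimal illustration, not the paper's implementation: the reference distribution `llm_ref`, the fixed `q_vals`, and the `ref_w` pull-strength are all assumed for the example.

```python
import numpy as np

n_items = 6

# Hypothetical frozen LLM reference policy: favors novel items 4 and 5.
llm_ref = np.array([0.05, 0.05, 0.1, 0.1, 0.35, 0.35])

# Lightweight learnable actor (logits) and assumed critic Q-values.
logits = np.zeros(n_items)
q_vals = np.array([0.9, 0.8, 0.2, 0.1, 0.6, 0.3])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def actor_step(logits, lr=0.5, ref_w=0.3):
    """Push probability toward high-value items, with a pull toward the
    LLM reference so novel candidates are not driven to zero."""
    pi = softmax(logits)
    advantage = q_vals - pi @ q_vals   # value relative to current policy
    grad = pi * advantage              # policy-gradient-style direction
    grad += ref_w * (llm_ref - pi)     # regularize toward LLM reference
    return logits + lr * grad

for _ in range(100):
    logits = actor_step(logits)

pi = softmax(logits)
```

After training, the policy concentrates on high-Q items while the reference term keeps the LLM-favored novel items (4, 5) from collapsing, which is the qualitative behavior the abstract describes.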
Problem

Research questions and friction points this paper is trying to address.

Enhancing recommendation diversity and novelty using LLMs
Reducing reliance on random exploration in RL systems
Mitigating overestimation of unreliable LLM suggestions
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided adversarial actor-critic framework
Bilevel optimization for selective novelty
Regularization anchoring critic values for unexplored items to reliable dataset estimates
Authors
Jiin Woo (Carnegie Mellon University)
Alireza Bagheri Garakani (Amazon)
Tianchen Zhou (Amazon)
Zhishen Huang (Amazon)
Yan Gao (Amazon)

Topics: Reinforcement Learning · Multi-Armed Bandit · Multi-Objective Optimization