🤖 AI Summary
Recommender systems must jointly optimize diversity, novelty, and click relevance, yet existing exploration strategies typically rely on stochastic perturbations or depend heavily on large language models (LLMs). To address this, we propose LAAC (LLM-guided Adversarial Actor Critic), a lightweight framework that uses an LLM as a reference policy to generate high-quality, novel candidates while training a compact policy network continuously on real user interaction data. LAAC introduces a bilevel optimization between actor and critic networks and a regularization term that anchors critic values for underexposed items, stabilizing value estimation and mitigating LLM-induced biases. Crucially, LAAC requires no LLM fine-tuning. Extensive experiments on multiple real-world datasets demonstrate consistent gains (+12.3% in diversity, +18.7% in novelty, and +5.2% in NDCG@10) and strong robustness under long-tail, data-imbalanced conditions.
📝 Abstract
In recommendation systems, diversity and novelty are essential for capturing varied user preferences and encouraging exploration, yet many systems prioritize click relevance. While reinforcement learning (RL) has been explored to improve diversity, it often depends on random exploration that may not align with user interests. We propose LAAC (LLM-guided Adversarial Actor Critic), a novel method that leverages large language models (LLMs) as reference policies to suggest novel items, while training a lightweight policy to refine these suggestions using system-specific data. The method formulates training as a bilevel optimization between actor and critic networks, enabling the critic to selectively favor promising novel actions and the actor to improve its policy beyond LLM recommendations. To mitigate overestimation of unreliable LLM suggestions, we apply regularization that anchors critic values for unexplored items close to well-estimated dataset actions. Experiments on real-world datasets show that LAAC outperforms existing baselines in diversity, novelty, and accuracy, while remaining robust on imbalanced data, effectively integrating LLM knowledge without expensive fine-tuning.
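The critic regularization described above, anchoring value estimates for unexplored LLM-suggested items near well-estimated dataset actions, can be sketched as a loss function. This is a minimal illustration, not the paper's implementation: the function name `regularized_critic_loss`, the squared-error anchor form, and the hyperparameter `lam` are all assumptions.

```python
import numpy as np

def regularized_critic_loss(q_data, q_llm, td_targets, lam=0.5):
    """Hypothetical sketch of LAAC-style critic regularization.

    q_data:     critic values for actions observed in the dataset
    q_llm:      critic values for novel, unexplored LLM-suggested actions
    td_targets: bootstrapped TD targets for the dataset actions
    lam:        assumed regularization strength (not from the paper)
    """
    # Standard TD loss on dataset actions, whose values are well estimated.
    td_loss = np.mean((q_data - td_targets) ** 2)
    # Anchor the values of unexplored LLM suggestions near the mean
    # dataset value, discouraging overestimation of unreliable items.
    anchor = q_data.mean()
    reg = np.mean((q_llm - anchor) ** 2)
    return td_loss + lam * reg
```

In a full training loop, this loss would be minimized in the inner level of the bilevel optimization, while the actor is updated against the resulting critic in the outer level.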