Energy-Guided Diffusion Sampling for Long-Term User Behavior Prediction in Reinforcement Learning-based Recommendation

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in offline reinforcement learning for recommendation systems (RL4RS)—including abundant noisy trajectories, difficulty in modeling long-term user preferences, and low data efficiency—this paper proposes DAC4Rec, a novel framework integrating denoising diffusion models with a Q-value-guided Actor-Critic architecture. DAC4Rec employs energy-function-constrained guided sampling to reduce generative stochasticity and enable robust modeling of suboptimal trajectories. Furthermore, it introduces a Q-value-driven policy optimization mechanism to explicitly enhance long-term preference capture. Evaluated on six real-world offline datasets and an online simulation environment, DAC4Rec consistently outperforms state-of-the-art offline RL methods, achieving average improvements of 12.3% in Recall@10 and NDCG@10. Results demonstrate its effectiveness in noise tolerance, long-horizon behavioral modeling, and cross-domain generalization.
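The energy-function-constrained guided sampling described above can be illustrated with a toy reverse-diffusion step in the classifier-guidance style: the denoising mean is nudged down the gradient of an energy function before noise is re-added, which biases generated actions toward low-energy (preferred) regions. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the quadratic `energy`, the zero-noise stand-in `denoiser`, the linear beta schedule, and `guidance_scale` are all illustrative choices.

```python
import numpy as np

def energy(a):
    # Hypothetical energy: low near a preferred action anchor (illustrative only).
    target = np.array([0.5, -0.2])
    return np.sum((a - target) ** 2)

def energy_grad(a, eps=1e-4):
    # Central-difference numerical gradient of the energy function.
    g = np.zeros_like(a)
    for i in range(a.size):
        d = np.zeros_like(a)
        d[i] = eps
        g[i] = (energy(a + d) - energy(a - d)) / (2 * eps)
    return g

def guided_reverse_step(a_t, t, betas, denoiser, guidance_scale, rng):
    """One DDPM reverse step whose mean is shifted down the energy gradient."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps_hat = denoiser(a_t, t)  # predicted noise from the (learned) denoiser
    mean = (a_t - beta_t / np.sqrt(1.0 - alpha_bar) * eps_hat) / np.sqrt(alpha_t)
    mean = mean - guidance_scale * beta_t * energy_grad(mean)  # energy guidance
    noise = rng.standard_normal(a_t.shape) if t > 0 else np.zeros_like(a_t)
    return mean + np.sqrt(beta_t) * noise

# Toy denoiser that predicts zero noise (stand-in for the trained network).
denoiser = lambda a, t: np.zeros_like(a)
betas = np.linspace(1e-4, 0.02, 10)
rng = np.random.default_rng(0)
a = rng.standard_normal(2)
for t in reversed(range(len(betas))):
    a = guided_reverse_step(a, t, betas, denoiser, guidance_scale=2.0, rng=rng)
```

Raising `guidance_scale` trades sample diversity for stronger adherence to the energy constraint, which matches the paper's stated goal of reducing generative stochasticity.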

📝 Abstract
Reinforcement learning-based recommender systems (RL4RS) have gained attention for their ability to adapt to dynamic user preferences. However, these systems face challenges, particularly in offline settings, where data inefficiency and reliance on pre-collected trajectories limit their broader applicability. While offline reinforcement learning methods leverage extensive datasets to address these issues, they often struggle with noisy data and fail to capture long-term user preferences, resulting in suboptimal recommendation policies. To overcome these limitations, we propose Diffusion-enhanced Actor-Critic for Offline RL4RS (DAC4Rec), a novel framework that integrates diffusion processes with reinforcement learning to model complex user preferences more effectively. DAC4Rec leverages the denoising capabilities of diffusion models to enhance the robustness of offline RL algorithms and incorporates a Q-value-guided policy optimization strategy to better handle suboptimal trajectories. Additionally, we introduce an energy-based sampling strategy to reduce randomness during recommendation generation, ensuring more targeted and reliable outcomes. We validate the effectiveness of DAC4Rec through extensive experiments on six real-world offline datasets and in an online simulation environment, demonstrating its ability to optimize long-term user preferences. Furthermore, we show that the proposed diffusion policy can be seamlessly integrated into other commonly used RL algorithms in RL4RS, highlighting its versatility and wide applicability.
Problem

Research questions and friction points this paper is trying to address.

Addressing data inefficiency in offline reinforcement learning recommenders
Overcoming noisy data limitations in long-term user preference modeling
Reducing randomness in recommendation generation for reliable outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates diffusion processes with reinforcement learning
Uses Q-value-guided policy optimization strategy
Introduces energy-based sampling for targeted recommendations
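The Q-value-guided policy optimization in the list above can be sketched as an actor objective that balances staying close to the logged data (the diffusion reconstruction term) against maximizing estimated long-term return (the Q term). This is a hedged sketch, not the paper's exact loss: the trade-off weight `eta` and the Q-normalization trick are common offline-RL conventions assumed here, not details from the source.

```python
import numpy as np

def actor_loss(diffusion_bc_loss, q_values, eta=1.0):
    """Combine a diffusion behavior-cloning term with a Q-value term.

    Minimizing this keeps generated actions near the offline trajectories
    (first term) while steering them toward high estimated long-term
    user preference as judged by the critic (second term).
    """
    # Normalize Q so eta keeps a consistent scale across batches
    # (an assumed, commonly used stabilization trick).
    q_norm = q_values / (np.abs(q_values).mean() + 1e-8)
    return diffusion_bc_loss.mean() - eta * q_norm.mean()

# Toy batch: per-sample denoising losses and critic estimates.
bc = np.array([0.5, 0.3])
q = np.array([1.0, 3.0])
loss = actor_loss(bc, q, eta=1.0)
```

With `eta = 0` the policy reduces to pure diffusion behavior cloning; larger `eta` pushes it toward the critic's value estimates, which is how suboptimal trajectories can still yield an improved policy.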