Fine-Tuning Diffusion-Based Recommender Systems via Reinforcement Learning with Reward Function Optimization

📅 2025-11-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high fine-tuning cost and diminishing returns of diffusion-based recommender systems, this paper proposes ReFiT, a framework that first formulates the denoising process as a task-aligned Markov decision process (MDP). ReFiT then introduces a collaborative signal-aware reward function that directly reflects recommendation quality, eliminating reliance on noisy external reward models, and employs policy gradient optimization to refine denoising trajectories, maximizing the exact log-likelihood of observed user-item interactions. Evaluated on multiple real-world datasets, ReFiT achieves up to a 36.3% relative improvement over the strongest baseline in sequential recommendation, with linear computational complexity and strong generalization across diverse diffusion-based recommendation settings. Its core innovation is to model fine-grained denoising control in diffusion recommenders explicitly as a reinforcement learning problem, enabling end-to-end, proxy-free, and precise fine-tuning.
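To make the summary concrete, below is a minimal sketch of the kind of policy-gradient fine-tuning loop it describes, treating each reverse denoising step as an MDP action whose log-probability is reinforced by a recommendation-quality reward. All names here (`denoiser`, the Gaussian reverse transition, the plain log-likelihood reward) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a REINFORCE-style fine-tuning step for a diffusion
# recommender. `denoiser(x_t, t)` is a hypothetical model returning the mean
# and std of the Gaussian reverse transition p_theta(x_{t-1} | x_t); the
# reward is a stand-in for the paper's collaborative signal-aware reward.
import torch

def policy_gradient_step(denoiser, x_T, interactions, optimizer, T=10):
    x_t = x_T
    log_probs = []
    for t in reversed(range(1, T + 1)):
        mean, std = denoiser(x_t, t)                   # state (x_t, t) -> policy
        dist = torch.distributions.Normal(mean, std)
        x_prev = dist.sample()                         # action: one denoising step
        log_probs.append(dist.log_prob(x_prev).sum())
        x_t = x_prev
    # Terminal reward: log-likelihood of the observed user-item interactions
    # under the fully denoised scores (x_t is x_0 after the loop).
    reward = -torch.nn.functional.binary_cross_entropy_with_logits(
        x_t, interactions, reduction="sum"
    )
    loss = -reward.detach() * torch.stack(log_probs).sum()  # REINFORCE estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(reward)
```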

πŸ“ Abstract
Diffusion models recently emerged as a powerful paradigm for recommender systems, offering state-of-the-art performance by modeling the generative process of user-item interactions. However, training such models from scratch is both computationally expensive and yields diminishing returns once convergence is reached. To remedy these challenges, we propose ReFiT, a new framework that integrates Reinforcement learning (RL)-based Fine-Tuning into diffusion-based recommender systems. In contrast to prior RL approaches for diffusion models that depend on external reward models, ReFiT adopts a task-aligned design: it formulates the denoising trajectory as a Markov decision process (MDP) and incorporates a collaborative signal-aware reward function that directly reflects recommendation quality. By tightly coupling the MDP structure with this reward signal, ReFiT empowers the RL agent to exploit high-order connectivity for fine-grained optimization, while avoiding the noisy or uninformative feedback common in naive reward designs. Leveraging policy gradient optimization, ReFiT maximizes the exact log-likelihood of observed interactions, thereby enabling effective post hoc fine-tuning of diffusion recommenders. Comprehensive experiments on a wide range of real-world datasets demonstrate that the proposed ReFiT framework (a) exhibits substantial performance gains over strong competitors (up to 36.3% on sequential recommendation), (b) demonstrates strong efficiency with linear complexity in the number of users or items, and (c) generalizes well across multiple diffusion-based recommendation scenarios. The source code and datasets are publicly available at https://anonymous.4open.science/r/ReFiT-4C60.
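To spell out the MDP framing in the abstract, the block below shows one standard way to cast a reverse denoising trajectory as an MDP with a terminal log-likelihood reward. This is a reconstruction under assumptions, not the paper's exact definitions; in particular, ReFiT's actual reward also encodes collaborative signals rather than the plain likelihood shown here.

```latex
% Assumed MDP formulation of the reverse denoising chain (illustrative):
% state, action, and terminal reward, with y the observed user-item interactions
s_t = (\mathbf{x}_t,\, t), \qquad
a_t = \mathbf{x}_{t-1} \sim p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t), \qquad
r = \log p_\theta(\mathbf{y} \mid \mathbf{x}_0).
% REINFORCE-style policy gradient over the denoising trajectory:
\nabla_\theta J(\theta)
  = \mathbb{E}\!\left[\, r \sum_{t=1}^{T}
    \nabla_\theta \log p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) \right].
```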
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning diffusion recommenders via reinforcement learning to improve performance
Optimizing reward functions to directly reflect recommendation quality metrics
Enabling efficient post-training adaptation of diffusion-based recommendation systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning fine-tunes diffusion recommender systems
MDP formulation with collaborative signal-aware reward function
Policy gradient optimization maximizes interaction log-likelihood (a reward sketch follows this list)
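The "collaborative signal-aware" bullet is the part most specific to recommendation, so here is a hedged sketch of what such a reward could look like: it blends a direct likelihood-style score on observed items with high-order item-item connectivity derived from the interaction matrix. Every name and the exact weighting are assumptions for illustration; the paper's actual reward may differ.

```python
# Hypothetical sketch of a collaborative signal-aware reward (not the paper's
# exact formulation): blend the direct score on observed items with high-order
# item-item connectivity mined from the user-item interaction matrix.
import numpy as np

def collaborative_reward(scores, history, interactions, k=10, alpha=0.5):
    """scores: predicted item scores for one user, shape (n_items,)
    history: indices of the user's previously consumed items
    interactions: binary user-item matrix, shape (n_users, n_items)
    """
    co = interactions.T @ interactions          # item-item co-occurrence graph
    top_k = np.argsort(-scores)[:k]             # recommended slate
    direct = scores[history].mean()             # direct recommendation-quality term
    # High-order term: how strongly the slate connects to the user's history.
    connectivity = co[np.ix_(top_k, history)].mean()
    return alpha * direct + (1.0 - alpha) * connectivity
```

In a ReFiT-style loop, a scalar like this would replace the plain log-likelihood reward in the policy-gradient sketch given under the AI summary above.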
🔎 Similar Papers
No similar papers found.
Yu Hou
School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, Republic of Korea
Hua Li
Department of Industrial Engineering, Yonsei University, Seoul 03722, Republic of Korea
Ha Young Kim
Graduate School of Information, Yonsei University, Seoul 03722, Republic of Korea
Won-Yong Shin
Professor, CSE at Yonsei University
data mining, machine learning, information theory, mobile computing, wireless networking