Rank-GRPO: Training LLM-based Conversational Recommender Systems with Reinforcement Learning

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three critical issues in LLM-based conversational recommender systems—out-of-catalog recommendations, malformed response formats, and sharp degradation in ranking quality toward the tail of the list—this paper proposes ConvRec-R1, an end-to-end trainable two-stage framework. Methodologically, it introduces (1) a Remap-Reflect-Adjust pipeline that constructs high-quality, catalog-grounded demonstration data to warm-start RL training, and (2) Rank-GRPO, a novel reinforcement learning algorithm that treats each rank in the recommendation list as the unit of policy update, redefines rewards to eliminate non-causal credit assignment, and stabilizes updates with a rank-level importance ratio based on the geometric mean of rank-wise token probabilities. Experiments on Reddit-v2 demonstrate that ConvRec-R1 converges faster and significantly outperforms GRPO-style baselines in Recall and NDCG. The code and datasets are publicly available.
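The "non-causal credit assignment" the summary mentions can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' code: a single list-level reward spread over the whole output gives early ranks credit for items chosen later, whereas a rank-wise reward (here a DCG-style per-rank gain, an assumed choice) depends only on the item actually placed at that rank.

```python
import math

def rank_wise_rewards(ranked_items, relevant_items):
    """Assign each rank its own DCG-style gain: 1/log2(k+1) if the item
    at rank k (1-indexed) is relevant to the user, else 0. Each rank's
    reward depends only on that rank, not on later items."""
    rewards = []
    for k, item in enumerate(ranked_items, start=1):
        gain = 1.0 / math.log2(k + 1) if item in relevant_items else 0.0
        rewards.append(gain)
    return rewards
```

For example, a list `["a", "b", "c"]` with relevant set `{"a", "c"}` yields gains only at ranks 1 and 3; rank 2 receives zero reward regardless of what follows it.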

📝 Abstract
Large language models (LLMs) are reshaping the recommender system paradigm by enabling users to express preferences and receive recommendations through conversations. Yet, aligning LLMs to the recommendation task remains challenging: pretrained LLMs often generate out-of-catalog items, violate required output formats, and their ranking quality degrades sharply toward the end of the generated list. To this end, we propose ConvRec-R1, a two-stage framework for end-to-end training of LLM-based conversational recommender systems. In Stage 1, we construct a behavioral-cloning dataset with a Remap-Reflect-Adjust pipeline, which produces high-quality, catalog-grounded demonstrations from powerful black-box LLMs to warm-start the RL training. In Stage 2, we propose Rank-GRPO, a principled extension of group relative policy optimization (GRPO) tailored to tasks with rank-style outputs. Rank-GRPO treats each rank in the recommendation list as the unit, rather than the token (too fine-grained) or the whole sequence (too coarse), redefining rewards to remove non-causal credit assignment and introducing a rank-level importance ratio based on the geometric mean of rank-wise token probabilities to stabilize policy updates. Experiments on the public Reddit-v2 dataset show that ConvRec-R1 converges faster and achieves higher Recall and NDCG than GRPO-style baselines. Code and datasets are released at https://github.com/yaochenzhu/Rank-GRPO.
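The rank-level importance ratio described in the abstract can be sketched as follows. This is an assumed reading, not the paper's exact formulation: if the ratio for a rank is the geometric mean of the per-token probability ratios over the tokens spelling out that rank's item, it equals the exponential of the mean per-token log-probability difference between the new and old policies.

```python
import math

def rank_importance_ratio(new_logprobs, old_logprobs):
    """Geometric mean of token-level ratios pi_new/pi_old for one rank.

    new_logprobs / old_logprobs: per-token log-probabilities of the tokens
    that make up this rank's item under the current and behavior policies.
    (prod_t pi_new/pi_old)^(1/T) == exp(mean_t (logp_new - logp_old))
    """
    assert len(new_logprobs) == len(old_logprobs) and new_logprobs
    mean_diff = sum(n - o for n, o in zip(new_logprobs, old_logprobs)) / len(new_logprobs)
    return math.exp(mean_diff)
```

Averaging in log space keeps the ratio on a per-token scale, so long item names do not produce the exploding or vanishing products a raw sequence-level ratio would.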
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs to prevent generating out-of-catalog items
Improving ranking quality degradation in recommendation lists
Stabilizing policy updates for rank-style output tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework trains conversational recommender systems
Behavioral cloning pipeline generates catalog-grounded demonstrations
Rank-GRPO optimizes policy using rank-level reward and ratio
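The rank-level reward-and-ratio idea above pairs naturally with GRPO's group-relative baseline. As a hedged sketch (assuming, as in standard GRPO, that advantages are normalized across the G sampled lists in a group, applied here per rank), the advantage at each rank could be computed like this:

```python
import statistics

def rank_advantages(group_rewards, eps=1e-8):
    """GRPO-style group-relative advantages, computed per rank.

    group_rewards[g][k] = reward of rank k in the g-th sampled list.
    At each rank k, rewards are normalized across the group:
    A[g][k] = (r[g][k] - mean_g) / (std_g + eps).
    """
    n_ranks = len(group_rewards[0])
    advantages = [[0.0] * n_ranks for _ in group_rewards]
    for k in range(n_ranks):
        column = [rewards[k] for rewards in group_rewards]
        mu = statistics.fmean(column)
        sd = statistics.pstdev(column)
        for g in range(len(group_rewards)):
            advantages[g][k] = (group_rewards[g][k] - mu) / (sd + eps)
    return advantages
```

Normalizing within each rank means a list is rewarded for beating its group peers at that position, which matches the paper's framing of the rank, rather than the token or the sequence, as the unit of comparison.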