Learning to Reflect and Correct: Towards Better Decoding Trajectories for Large-Scale Generative Recommendation

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of generative recommender systems: single-pass decoding offers no correction mechanism, so early-stage biases propagate along the decoding trajectory and degrade recommendation quality. To mitigate this, the authors propose the Generative Recommendation with Correction (GRC) framework, which introduces, for the first time, a structured reflection-and-correction mechanism into generative recommendation through a three-stage decoding pipeline: generation, reflection, and correction. GRC incorporates multi-granularity reflection templates, a reward function fusing token-level and trajectory-level signals, and an Entropy-Guided Reflection Scheduling (EGRS) strategy, jointly optimized via GRPO reinforcement learning. Extensive experiments demonstrate that GRC significantly outperforms six state-of-the-art baselines across multiple real-world datasets, improving recommendation quality by up to 15.74%. Online A/B tests further show a 1.79% increase in ad revenue with only marginal latency overhead.

📝 Abstract
Generative Recommendation (GR) has become a promising paradigm for large-scale recommendation systems. However, existing GR models typically perform single-pass decoding without explicit refinement, causing early deviations to accumulate and ultimately degrade recommendation quality. To tackle this problem, we propose GRC, which is, to our knowledge, the first structured reflection-correction framework for GR that extends standard decoding into a Generation-Reflection-Correction (GRC) process. Concretely, GRC introduces a supervised reflection-correction template that decomposes the decoding process into initial draft generation, multi-granular reflection, and reflection-guided correction, thereby enabling structured reflection and correction in the semantic token space. To further explore the enlarged refinement space introduced by the GRC process, we optimize the entire GRC trajectory with GRPO-based reinforcement learning, under a carefully designed reward function with token-level and trajectory-level signals. For efficient online serving, we propose an Entropy-Guided Reflection Scheduling (EGRS) strategy that dynamically allocates more correction budget to high-uncertainty decoding trajectories during beam search. Extensive experiments on real-world datasets show that GRC consistently outperforms six state-of-the-art baselines by up to 15.74%, and online A/B tests demonstrate its substantial practical value in large-scale industrial recommendation, delivering a 1.79% lift in advertising revenue with only modest latency overhead.
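The EGRS idea described above (spending more correction budget on high-uncertainty beams) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the use of Shannon entropy over next-token probabilities, and the proportional-allocation rule are not taken from the paper's implementation.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability
    distribution; higher entropy means a more uncertain decoding step."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def schedule_corrections(beam_entropies, total_budget):
    """Hypothetical entropy-guided scheduler: split a fixed number of
    reflection-correction passes across beams in proportion to each
    beam's accumulated decoding entropy, so high-uncertainty
    trajectories receive more correction budget."""
    total = sum(beam_entropies)
    if total == 0:
        return [0] * len(beam_entropies)
    return [round(total_budget * h / total) for h in beam_entropies]
```

For example, a beam whose decoding steps were near-uniform (high entropy) would receive more correction passes than a beam that decoded confidently, which matches the abstract's description of allocating budget dynamically during beam search.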
Problem

Research questions and friction points this paper is trying to address.

Generative Recommendation
Decoding Trajectory
Reflection-Correction
Recommendation Quality
Bias Accumulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Recommendation
Reflection-Correction Framework
Reinforcement Learning
Decoding Trajectory Optimization
Entropy-Guided Scheduling
Authors
Haibo Xing — Alibaba International Digital Commerce Group
Hao Deng — Engineer, recommendation systems
Lingyu Mu — Alibaba International Digital Commerce Group
Jinxin Hu — Alibaba
Yu Zhang — Alibaba International Digital Commerce Group
Xiaoyi Zeng — Alibaba International Digital Commerce Group
Jing Zhang — School of Computer Science, Wuhan University