Breaking the Curse of Repulsion: Optimistic Distributionally Robust Policy Optimization for Off-Policy Generative Recommendation

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of off-policy generative recommendation to low-quality historical data, which often leads to policy model collapse. To mitigate this issue, the authors propose a distributionally robust policy optimization framework that leverages Optimistic Distributionally Robust Optimization (Optimistic DRO) to identify and recover high-quality latent distributions within the behavior policy, effectively filtering out noisy samples that induce divergence. Theoretical analysis reveals that negative gradient updates can cause exponentially escalating instability, and the study establishes, for the first time, that a hard filtering mechanism constitutes the exact solution to the optimistic DRO problem. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on mixed-quality recommendation benchmarks, significantly alleviating model collapse and enhancing recommendation quality.

📝 Abstract
Policy-based Reinforcement Learning (RL) has established itself as the dominant paradigm in generative recommendation for optimizing sequential user interactions. However, when applied to offline historical logs, these methods suffer a critical failure: the dominance of low-quality data induces severe model collapse. We first establish the Divergence Theory of Repulsive Optimization, revealing that negative gradient updates inherently trigger exponential intensity explosion during off-policy training. This theory elucidates the inherent dilemma of existing methods, exposing their inability to reconcile variance reduction and noise imitation. To break this curse, we argue that the solution lies in rigorously identifying the latent high-quality distribution entangled within the noisy behavior policy. Accordingly, we reformulate the objective as an Optimistic Distributionally Robust Optimization (DRO) problem. Guided by this formulation, we propose Distributionally Robust Policy Optimization (DRPO). We prove that hard filtering is the exact solution to this DRO objective, enabling DRPO to optimally recover high-quality behaviors while strictly discarding divergence-inducing noise. Extensive experiments demonstrate that DRPO achieves state-of-the-art performance on mixed-quality recommendation benchmarks.
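The abstract contrasts repulsive (negative-gradient) updates with hard filtering, which the paper proves is the exact solution to the optimistic DRO objective. A minimal illustrative sketch of that weighting contrast is below; the function names, the scalar `threshold`, and the toy rewards are all hypothetical and not from the paper:

```python
import numpy as np

def hard_filter_weights(rewards, threshold):
    """Hard filtering: keep samples whose reward clears the threshold
    (weight 1) and strictly discard the rest (weight 0), so no sample
    ever receives a repulsive negative weight."""
    return (rewards >= threshold).astype(float)

def signed_advantage_weights(rewards):
    """Baseline-style contrast: mean-centered weights give below-average
    samples negative weights, i.e. the repulsive updates the paper argues
    trigger divergence in off-policy training."""
    return rewards - rewards.mean()

rewards = np.array([0.9, 0.1, 0.8, 0.05])
w_hard = hard_filter_weights(rewards, threshold=0.5)  # [1., 0., 1., 0.]
w_adv = signed_advantage_weights(rewards)

# Hard filtering never emits negative (repulsive) weights;
# signed-advantage weighting does.
assert (w_hard >= 0).all()
assert (w_adv < 0).any()
```

The point of the sketch is only the sign structure: low-quality samples are dropped rather than pushed away, which is the mechanism the paper credits with avoiding the exponential intensity explosion of repulsive optimization.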
Problem

Research questions and friction points this paper is trying to address.

off-policy generative recommendation
model collapse
low-quality data
distributional robustness
offline reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally Robust Optimization
Off-Policy Learning
Generative Recommendation
Model Collapse
Hard Filtering
Jie Jiang
Tencent Inc, China
Yusen Huo
Tencent Inc, China
Xiangxin Zhan
Tencent Inc, China
Changping Wang
Tencent Inc, China
Jun Zhang
Tencent
AI codec · image/video generation · medical image analysis