🤖 AI Summary
This work addresses the vulnerability of off-policy generative recommendation to low-quality historical data, which often leads to policy model collapse. To mitigate this issue, the authors propose a distributionally robust policy optimization framework that leverages Optimistic Distributionally Robust Optimization (Optimistic DRO) to identify and recover high-quality latent distributions within the behavior policy, effectively filtering out the noisy samples that induce divergence. Theoretical analysis reveals that negative gradient updates can cause exponentially escalating instability, and the study establishes, for the first time, that a hard filtering mechanism is the exact solution to the optimistic DRO problem. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on mixed-quality recommendation benchmarks, significantly alleviating model collapse and enhancing recommendation quality.
📝 Abstract
Policy-based Reinforcement Learning (RL) has established itself as the dominant paradigm in generative recommendation for optimizing sequential user interactions. However, when applied to offline historical logs, these methods suffer a critical failure: the dominance of low-quality data induces severe model collapse. We first establish the Divergence Theory of Repulsive Optimization, revealing that negative gradient updates inherently trigger exponential intensity explosion during off-policy training. This theory elucidates the inherent dilemma of existing methods, exposing their inability to reconcile variance reduction and noise imitation. To break this curse, we argue that the solution lies in rigorously identifying the latent high-quality distribution entangled within the noisy behavior policy. Accordingly, we reformulate the objective as an Optimistic Distributionally Robust Optimization (DRO) problem. Guided by this formulation, we propose Distributionally Robust Policy Optimization (DRPO). We prove that hard filtering is the exact solution to this DRO objective, enabling DRPO to optimally recover high-quality behaviors while strictly discarding divergence-inducing noise. Extensive experiments demonstrate that DRPO achieves state-of-the-art performance on mixed-quality recommendation benchmarks.
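To make the hard-filtering idea concrete, the sketch below shows a minimal, hypothetical version of the mechanism the abstract describes: keep only the top fraction of a mixed-quality batch by reward, and compute the policy gradient over the retained samples alone, so discarded low-quality samples contribute no repulsive (negative) gradient at all. This is not the authors' implementation; the function names (`hard_filter_weights`, `filtered_policy_gradient`), the top-fraction threshold rule, and the `keep_ratio` parameter are illustrative assumptions.

```python
def hard_filter_weights(rewards, keep_ratio=0.5):
    """Binary weights: 1.0 for samples in the top keep_ratio fraction
    by reward, 0.0 otherwise (ties at the threshold are kept)."""
    k = max(1, round(keep_ratio * len(rewards)))
    threshold = sorted(rewards, reverse=True)[k - 1]
    return [1.0 if r >= threshold else 0.0 for r in rewards]

def filtered_policy_gradient(grad_log_probs, rewards, keep_ratio=0.5):
    """Reward-weighted score-function gradient averaged over the
    retained samples only; filtered-out samples are dropped entirely
    instead of being pushed away with a negative gradient."""
    w = hard_filter_weights(rewards, keep_ratio)
    kept = sum(w)
    dim = len(grad_log_probs[0])
    g = [0.0] * dim
    for wi, ri, gi in zip(w, rewards, grad_log_probs):
        if wi:  # only high-quality samples contribute
            for d in range(dim):
                g[d] += ri * gi[d]
    return [x / kept for x in g]
```

In this toy form, the "optimistic" reweighting of the behavior distribution collapses to a 0/1 mask, which is the hard-filtering property the paper proves exact for its DRO objective.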