Robust Post-Training for Generative Recommenders: Why Exponential Reward-Weighted SFT Outperforms RLHF

πŸ“… 2026-03-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing generative recommender systems struggle to effectively align with user preferences under realistic constraints such as noisy feedback, unreliable reward models, missing propensity scores, and the absence of online interaction. This work proposes an exponential reward-weighted supervised fine-tuning (SFT) approach that directly optimizes the policy using observed rewards, eliminating the need to learn a reward model or estimate propensity scores. Fully offline and immune to reward hacking, the method provides the first theoretical guarantee for policy improvement in generative recommendation under noisy rewards. It reveals that the temperature parameter Ξ» explicitly governs the trade-off between robustness and performance and introduces a theoretically grounded, interpretable regularization mechanism. Experiments across three public and one proprietary dataset demonstrate significant improvements over four strong baselines, confirming the method’s simplicity, scalability, and consistent advantage over RLHF-based approaches.

πŸ“ Abstract
Aligning generative recommender systems to user preferences via post-training is critical for closing the gap between next-item prediction and actual recommendation quality. Existing post-training methods are ill-suited for production-scale systems: RLHF methods are prone to reward hacking due to noisy user feedback and unreliable reward models, offline RL alternatives require propensity scores that are unavailable, and online interaction is infeasible. We identify exponential reward-weighted SFT with weights $w = \exp(r/\lambda)$ as uniquely suited to this setting, and provide the theoretical and empirical foundations that explain why. By optimizing directly on observed rewards without querying a learned reward model, the method is immune to reward hacking, requires no propensity scores, and is fully offline. We prove the first policy improvement guarantees for this setting under noisy rewards, showing that the gap scales only logarithmically with catalog size and remains informative even for large item catalogs. Crucially, we show that temperature $\lambda$ explicitly and quantifiably controls the robustness-improvement tradeoff, providing practitioners with a single interpretable regularization hyperparameter with theoretical grounding. Experiments on three open-source and one proprietary dataset against four baselines confirm that exponential reward weighting is simple, scalable, and consistently outperforms RLHF-based alternatives.
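The core recipe described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the batch-level weight normalization are assumptions; only the weighting $w = \exp(r/\lambda)$ and the role of $\lambda$ as a robustness knob come from the abstract.

```python
import math

def exp_reward_weights(rewards, lam):
    """Exponential reward weights w_i = exp(r_i / lam).

    Small lam sharpens the weighting toward high-reward examples
    (more improvement, less robustness to reward noise); large lam
    flattens it toward uniform SFT (more robustness).
    """
    return [math.exp(r / lam) for r in rewards]

def weighted_sft_loss(nlls, rewards, lam):
    """Reward-weighted SFT objective on a logged batch: a weighted
    average of per-example negative log-likelihoods, with the weights
    normalized over the batch (normalization is an illustrative choice)."""
    w = exp_reward_weights(rewards, lam)
    return sum(wi * li for wi, li in zip(w, nlls)) / sum(w)

# Two logged interactions: per-example NLLs from the recommender and
# observed rewards (e.g. click = 1.0, no click = 0.0). No reward model
# or propensity score is queried -- only the logged rewards.
nlls = [2.0, 1.0]
rewards = [1.0, 0.0]
sharp = weighted_sft_loss(nlls, rewards, lam=0.1)   # dominated by the high-reward example
flat = weighted_sft_loss(nlls, rewards, lam=100.0)  # close to a plain average (uniform SFT)
```

At `lam=0.1` the high-reward example dominates, so the loss approaches that example's NLL (2.0); at `lam=100.0` the weights are nearly uniform and the loss approaches the plain mean (1.5), which is how Ξ» trades improvement against robustness to noisy rewards.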
Problem

Research questions and friction points this paper is trying to address.

generative recommender systems
post-training
reward hacking
offline learning
user preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

exponential reward-weighted SFT
generative recommender systems
reward hacking
offline post-training
policy improvement guarantee