🤖 AI Summary
Existing ranking methods rely on a generator-evaluator two-stage paradigm, but scaling the candidate set fails to overcome the combinatorial search bottleneck, leading to performance saturation. This paper proposes a purely generative, single-stage large ranking model that eliminates the evaluator and directly produces high-quality ranked lists in an end-to-end manner. Our key contributions are: (1) a theoretical proof showing that generator-only models incur strictly smaller approximation error than two-stage counterparts; (2) group-wise relative optimization, which leverages a reward model to construct intra-group relative reference policies, thereby enhancing list-level ranking fidelity; and (3) a scalable generative architecture coupled with user-feedback-driven reward modeling. Extensive experiments on public benchmarks and large-scale online A/B tests demonstrate significant improvements over state-of-the-art methods, validating the model's robustness and effectiveness in both offline evaluation and live production environments.
📝 Abstract
Mainstream ranking approaches typically follow a Generator-Evaluator two-stage paradigm, where a generator produces candidate lists and an evaluator selects the best one. Recent work has attempted to enhance performance by expanding the number of candidate lists, for example, through multi-generator settings. However, ranking involves selecting a recommendation list from a combinatorially large space, so simply enlarging the candidate set remains ineffective and performance gains quickly saturate. At the same time, recent advances in large recommendation models have shown that end-to-end one-stage models can achieve promising performance and are expected to benefit from scaling laws. Motivated by this, we revisit ranking from a generator-only one-stage perspective. We theoretically prove that, for any (finite Multi-)Generator-Evaluator model, there always exists a generator-only model that achieves strictly smaller approximation error to the optimal ranking policy, while also enjoying scaling laws as its size increases. Building on this result, we derive an evidence upper bound of the one-stage optimization objective, from which we find that one can leverage a reward model trained on real user feedback to construct a reference policy in a group-relative manner. This reference policy serves as a practical surrogate of the optimal policy, enabling effective training of a large generator-only ranker. Based on these insights, we propose GoalRank, a generator-only ranking framework. Extensive offline experiments on public benchmarks and large-scale online A/B tests demonstrate that GoalRank consistently outperforms state-of-the-art methods.
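As a rough illustration of the group-relative construction described above: given reward-model scores for the candidate lists in one group, one can mean-center the rewards within the group and apply a softmax to obtain a reference distribution over the candidates. This is a minimal sketch under our own assumptions (the function name, the mean-centering, and the softmax form are illustrative, not the paper's exact formulation):

```python
import math

def group_relative_reference(rewards, temperature=1.0):
    """Map reward-model scores for one group's candidate lists to a
    reference policy: mean-center rewards within the group, then take
    a temperature-scaled softmax. Hypothetical sketch, not GoalRank's
    exact objective."""
    mean = sum(rewards) / len(rewards)
    centered = [(r - mean) / temperature for r in rewards]
    # Subtract the max before exponentiating for numerical stability.
    m = max(centered)
    exps = [math.exp(c - m) for c in centered]
    z = sum(exps)
    return [e / z for e in exps]

# Higher-reward candidate lists receive more reference-policy mass.
probs = group_relative_reference([2.0, 1.0, 0.5])
```

Because the rewards are centered within each group, only relative quality inside the group matters; adding a constant to every reward leaves the reference policy unchanged.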