AI Summary
This work addresses the prior mismatch and linear redundancy inherent in the "noise-to-data" paradigm of generative sequential recommendation by proposing FAVE, the first framework to integrate flow models into one-step generative recommendation. FAVE employs a two-stage training strategy: the first stage constructs a stable user preference space, while the second introduces a semantic anchor prior and global average velocity modeling, combined with a Jacobian-vector product (JVP) consistency constraint, to compress multi-step generation trajectories into a single-step displacement. Evaluated on three benchmark datasets, FAVE achieves state-of-the-art recommendation performance while offering an order-of-magnitude improvement in inference efficiency, making it particularly suitable for low-latency applications.
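The core one-step mechanism described above can be illustrated with a toy sketch: the flow starts from a semantic anchor (a masked pooling of the user's interaction history rather than Gaussian noise), and a learned average-velocity network collapses the whole trajectory into a single displacement. All shapes, names, and the oracle velocity below are our own illustrative assumptions, not the authors' code.

```python
# Toy sketch of FAVE's one-step generation idea (illustrative only; the
# embedding table, masking scheme, and oracle velocity are assumptions).
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (toy choice)
history = rng.normal(size=(5, d))       # embeddings of 5 interacted items
mask = np.array([1, 1, 1, 1, 0.0])      # last position masked out

# Semantic anchor prior: masked pooling of the user's interaction history,
# used as the flow's starting point instead of uninformative noise.
anchor = (mask[:, None] * history).sum(0) / mask.sum()

target = rng.normal(size=d)             # embedding of the true next item

# A one-step generator predicts the *average* velocity over [0, 1]; for a
# perfectly straight trajectory it equals the total displacement, so a
# single Euler step lands on the target.
def avg_velocity(z0, x1):               # oracle stand-in for the learned network
    return x1 - z0

pred = anchor + 1.0 * avg_velocity(anchor, target)   # single-step displacement
print(np.allclose(pred, target))        # True: one step suffices on a straight path
```

The point of the sketch is the contrast with the "noise-to-data" paradigm: because the anchor already lies near the user's preference region and the trajectory is trained to be straight, the iterative solver degenerates into one displacement.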
Abstract
Generative recommendation has emerged as a transformative paradigm for capturing the dynamic evolution of user intents in sequential recommendation. While flow-based methods improve the efficiency of diffusion models, they remain hindered by the "Noise-to-Data" paradigm, which introduces two critical inefficiencies: prior mismatch, where generation starts from uninformative noise, forcing a lengthy recovery trajectory; and linear redundancy, where iterative solvers waste computation on modeling deterministic preference transitions. To address these limitations, we propose a Flow-based Average Velocity Establishment (Fave) framework for one-step generative recommendation that learns a direct trajectory from an informative prior to the target distribution. Fave is structured via a progressive two-stage training strategy. In Stage 1, we establish a stable preference space through dual-end semantic alignment, applying constraints at both the source (user history) and target (next item) to prevent representation collapse. In Stage 2, we directly resolve the efficiency bottlenecks by introducing a semantic anchor prior, which initializes the flow with a masked embedding from the user's interaction history, providing an informative starting point. We then learn a global average velocity, consolidating the multi-step trajectory into a single displacement vector, and enforce trajectory straightness via a JVP-based consistency constraint to ensure one-step generation. Extensive experiments on three benchmarks demonstrate that Fave not only achieves state-of-the-art recommendation performance but also delivers an order-of-magnitude improvement in inference efficiency, making it practical for latency-sensitive scenarios.
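The JVP-based consistency constraint mentioned in the abstract presumably enforces a MeanFlow-style identity relating the average velocity u(z_t, r, t) over an interval [r, t] to the instantaneous velocity v(z_t, t): u = v - (t - r) du/dt, where d/dt is the total derivative along the trajectory (computable in one pass as a Jacobian-vector product during training). The sketch below checks that identity numerically on an ideal straight conditional path, using a forward difference in place of an autodiff JVP; all notation and the toy path are our own assumptions, not the paper's implementation.

```python
# Numerical check of the average-velocity consistency identity
#   u(z_t, r, t) = v(z_t, t) - (t - r) * d/dt u(z_t, r, t)
# on a straight conditional path (hedged sketch; our own notation).
import numpy as np

rng = np.random.default_rng(1)
d = 8
z0, x1 = rng.normal(size=d), rng.normal(size=d)     # anchor and target (toy)

v = lambda z, t: x1 - z0                            # instantaneous velocity (constant on a straight path)
path = lambda t: (1 - t) * z0 + t * x1              # z_t on the conditional path

def u(z, r, t):
    # Ideal average velocity over [r, t]; the z argument mirrors the learned
    # network's signature, though on a straight path u = x1 - z0 everywhere.
    return (path(t) - path(r)) / (t - r)

r, t, eps = 0.2, 0.9, 1e-5
zt = path(t)

# Total derivative d/dt u(z_t, r, t): move t forward and z_t along v.
# This one-directional derivative is exactly what a JVP computes in training.
du_dt = (u(zt + eps * v(zt, t), r, t + eps) - u(zt, r, t)) / eps

residual = u(zt, r, t) - (v(zt, t) - (t - r) * du_dt)
print(np.abs(residual).max() < 1e-3)                # identity holds: residual ~ 0
```

In training, this residual would serve as a regression target for the average-velocity network; driving it to zero straightens the trajectory, which is what makes the single-step displacement at inference time valid.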