🤖 AI Summary
To address the high computational overhead in Transformer-based sequential recommendation—caused by discontinuous memory access in temporal encoding and redundant attention computation over long sequences—this paper proposes an efficient framework for modeling long user behavior sequences. The method introduces three key innovations: (1) a tunable exponential decay time encoder, inspired by the Ebbinghaus forgetting curve, to explicitly model the dynamic decay of user preferences; (2) a diagonal sliding sparse attention mechanism leveraging Toeplitz matrix symmetry, enabling hardware-friendly local-global temporal modeling; and (3) a decoder-only architecture optimized exclusively via matrix operations. Evaluated on four real-world datasets, the approach achieves state-of-the-art accuracy while accelerating training and inference by 4.74× and 6.18×, respectively—significantly improving practicality and scalability for long-sequence recommendation.
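The summary does not give the encoder's exact formulation, but the idea of an Ebbinghaus-style decay over relative time intervals, computed with pure matrix operations, can be sketched as follows. The function name and the decay parameters `alpha` and `beta` are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def temporal_decay_bias(timestamps, alpha=0.1, beta=1.0):
    """Illustrative exponential-power decay over relative time intervals.

    For interactions at `timestamps` (ascending), the bias for query i
    attending to key j decays as exp(-alpha * |t_i - t_j| ** beta),
    echoing the Ebbinghaus forgetting curve. `alpha` and `beta` are
    tunable knobs (hypothetical names); small `alpha` emphasizes
    long-term preferences, large `alpha` emphasizes recent behavior.
    """
    t = np.asarray(timestamps, dtype=float)
    dt = np.abs(t[:, None] - t[None, :])   # pairwise intervals via one dense matrix op
    bias = np.exp(-alpha * dt ** beta)     # exponential-power decay
    return np.tril(bias)                   # causal mask for decoder-only attention
```

Because the whole bias is produced by broadcasting and elementwise kernels over a dense matrix, memory access stays contiguous, in contrast to gather-style lookups into a temporal embedding table.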
📝 Abstract
Sequential recommendation aims to model users' evolving preferences based on their historical interactions. Recent advances leverage Transformer-based architectures to capture global dependencies, but existing methods often suffer from high computational overhead, primarily due to discontinuous memory access in temporal encoding and dense attention over long sequences. To address these limitations, we propose FuXi-$\gamma$, a novel sequential recommendation framework that improves both effectiveness and efficiency through principled architectural design. FuXi-$\gamma$ adopts a decoder-only Transformer structure and introduces two key innovations: (1) an exponential-power temporal encoder that encodes relative temporal intervals with a tunable exponential decay function inspired by the Ebbinghaus forgetting curve, enabling flexible modeling of both short-term and long-term preferences while maintaining high efficiency through continuous memory access and pure matrix operations; and (2) a diagonal-sparse positional mechanism that prunes low-contribution attention blocks using a diagonal-sliding strategy guided by the persymmetry of Toeplitz matrices. Extensive experiments on four real-world datasets demonstrate that FuXi-$\gamma$ achieves state-of-the-art recommendation quality while accelerating training by up to 4.74$\times$ and inference by up to 6.18$\times$, making it a practical and scalable solution for long-sequence recommendation. Our code is available at https://github.com/Yeedzhi/FuXi-gamma.
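The abstract does not spell out the diagonal-sliding pruning rule, but the general shape of a block-sparse causal attention mask that keeps only a sliding window of blocks along the main diagonal can be sketched as below. The function name and the `block_size` / `window_blocks` parameters are illustrative assumptions:

```python
import numpy as np

def diagonal_block_mask(seq_len, block_size=4, window_blocks=2):
    """Illustrative diagonal-sliding block-sparse attention mask.

    The attention matrix is tiled into `block_size` x `block_size` blocks;
    each query block attends only to the `window_blocks` nearest key
    blocks at or below the main diagonal, so far-off-diagonal
    (low-contribution) blocks are pruned entirely.
    """
    n_blocks = -(-seq_len // block_size)  # ceil division
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for qi in range(n_blocks):
        for kj in range(max(0, qi - window_blocks + 1), qi + 1):
            q0, q1 = qi * block_size, min((qi + 1) * block_size, seq_len)
            k0, k1 = kj * block_size, min((kj + 1) * block_size, seq_len)
            mask[q0:q1, k0:k1] = True
    return np.tril(mask)  # enforce causality inside the diagonal blocks
```

Pruning whole blocks rather than individual entries keeps the kept regions contiguous in memory, which is what makes such sparsity hardware-friendly; the paper's actual strategy additionally exploits the persymmetry of Toeplitz matrices, which this sketch does not model.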