🤖 AI Summary
This work addresses the high computational cost of Transformer-based sequential recommendation models on long user interaction histories. The authors propose a general-purpose personalized compression mechanism that uses learnable personalized tokens to compress extensive historical interactions, which are then fused with recent user behaviors to generate recommendations. The approach is compatible with mainstream Transformer architectures such as HSTU and HLLM. Extensive experiments on multiple base models demonstrate that the proposed method significantly reduces computational overhead while maintaining or even improving recommendation accuracy, effectively balancing efficiency and performance.
📝 Abstract
Recent years have witnessed the success of sequential modeling, generative recommenders, and large language models for recommendation. Although scaling laws have been validated for sequential models, these models are computationally inefficient in real-world applications such as recommendation, because the Transformer's cost grows non-linearly (quadratically) with sequence length. To improve the efficiency of sequential models, we introduce a novel approach to sequential recommendation that leverages personalization techniques to enhance efficiency and performance. Our method compresses long user interaction histories into learnable tokens, which are then combined with recent interactions to generate recommendations. This approach significantly reduces computational costs while maintaining high recommendation accuracy. Our method can be applied to existing Transformer-based recommendation models, e.g., HSTU and HLLM. Extensive experiments on multiple sequential models demonstrate its versatility and effectiveness. Source code is available at \href{https://github.com/facebookresearch/PerSRec}{https://github.com/facebookresearch/PerSRec}.
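To make the core idea concrete, here is a minimal dependency-free sketch of the compression step described above: a small set of learnable per-user query tokens attends over the long history (cross-attention pooling), and the resulting compressed tokens are prepended to the recent interactions before the backbone Transformer runs. This is an illustrative assumption of how such a mechanism can work, not the authors' implementation; all function names (`compress_history`, `build_input`) are hypothetical, and the real method would use trained embeddings inside HSTU/HLLM rather than raw vectors.

```python
import math

def _softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def compress_history(history, query_tokens):
    """Cross-attention pooling (illustrative): each learnable query token
    attends over the long history and yields one compressed embedding.

    history:      list of d-dim item embeddings (length L, large)
    query_tokens: list of d-dim learnable tokens (length k, small; k << L)
    returns:      list of k d-dim compressed embeddings
    """
    d = len(query_tokens[0])
    scale = math.sqrt(d)
    compressed = []
    for q in query_tokens:
        # Scaled dot-product scores of this query against every history item.
        scores = [sum(qi * hi for qi, hi in zip(q, h)) / scale for h in history]
        weights = _softmax(scores)
        # Weighted average of history embeddings under the attention weights.
        pooled = [sum(w * h[i] for w, h in zip(weights, history)) for i in range(d)]
        compressed.append(pooled)
    return compressed

def build_input(history, recent, query_tokens):
    """Replace the long history prefix with its k compressed tokens,
    keeping recent interactions verbatim for the backbone model."""
    return compress_history(history, query_tokens) + recent
```

The efficiency gain comes from sequence length: the backbone's self-attention now runs over k + |recent| tokens instead of |history| + |recent|, so its quadratic cost shrinks accordingly while the compressed tokens still summarize long-range preferences.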