🤖 AI Summary
To address the high computational cost and deployment challenges of large language models (LLMs) in sequential recommendation (SR), this paper proposes SLMRec, a knowledge distillation framework tailored for SR. We first identify substantial redundancy in the intermediate layers of LLMs for sequence modeling, and then design hierarchical attention distillation together with a sequence-aware loss function. SLMRec employs a small language model (SLM) augmented with sequence-modeling priors and is orthogonal to standard lightweight techniques (e.g., quantization and pruning). A theoretical analysis establishes a controllable upper bound on the distillation error. Experiments demonstrate that SLMRec, using only 13% of the LLM's parameters, outperforms LLM-based baselines across multiple SR benchmarks, while achieving 6.6× faster training and 8.0× faster inference, striking a superior balance between efficiency and recommendation accuracy.
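As a rough illustration of the kind of objective such a distillation framework could use, the sketch below combines a next-item prediction loss on the student with an alignment term between selected student and teacher hidden states. This is a minimal sketch only: the function name `distillation_loss`, the layer mapping, the cosine-distance alignment, and the weighting `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_hiddens, teacher_hiddens, item_logits, target_items,
                      layer_map, alpha=0.5):
    """Hypothetical feature-level distillation objective for SR.

    student_hiddens / teacher_hiddens: lists of [batch, seq_len, dim] hidden states.
    layer_map: (student_layer, teacher_layer) pairs whose representations are aligned.
    item_logits: student scores over the item catalogue, shape [batch, num_items].
    target_items: ground-truth next-item indices, shape [batch].
    """
    # Align selected student layers with (typically deeper) teacher layers.
    align = 0.0
    for s_idx, t_idx in layer_map:
        s_h = F.normalize(student_hiddens[s_idx], dim=-1)
        t_h = F.normalize(teacher_hiddens[t_idx], dim=-1)
        align = align + (1.0 - (s_h * t_h).sum(-1)).mean()  # mean cosine distance

    # Standard next-item prediction loss on the student.
    rec = F.cross_entropy(item_logits, target_items)

    return rec + alpha * align
```

The design intuition is that the recommendation loss keeps the student useful for the downstream SR task, while the alignment term transfers whatever sequence-modeling knowledge the teacher's intermediate layers carry.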
📝 Abstract
The Sequential Recommendation (SR) task involves predicting the next item a user is likely to interact with, given their past interactions. SR models examine the sequence of a user's actions to discern more complex behavioral patterns and temporal dynamics. Recent research demonstrates the great impact of LLMs on sequential recommendation systems, either by casting sequential recommendation as language modeling or by using the LLM as the backbone for user representation. Although these methods deliver outstanding performance, there is scant evidence of whether a large language model is actually necessary, or of how large the language model needs to be, especially in the sequential recommendation setting. Meanwhile, due to the huge size of LLMs, it is inefficient and impractical to apply an LLM-based model on real-world platforms that often need to process billions of traffic logs daily. In this paper, we explore the influence of LLM depth by conducting extensive experiments on large-scale industry datasets. Surprisingly, our motivational experiments reveal that most intermediate layers of LLMs are redundant, indicating that strong performance can be maintained even after these layers are pruned. Motivated by this insight, we empower small language models for SR, namely SLMRec, which adopts a simple yet effective knowledge distillation method. Moreover, SLMRec is orthogonal to other post-training efficiency techniques, such as quantization and pruning, so they can be leveraged in combination. Comprehensive experimental results illustrate that the proposed SLMRec model attains the best performance using only 13% of the parameters found in LLM-based recommendation models, while simultaneously achieving up to 6.6x and 8.0x speedups in training and inference time costs, respectively. In addition, we provide a theoretical justification for why small language models can perform comparably to large language models in SR.
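To make the layer-redundancy observation concrete, the sketch below probes how a decoder-only recommender behaves as its depth is truncated: the model is restricted to its first k transformer blocks and re-evaluated on a held-out set. The attribute `llm_backbone.layers`, the helper `evaluate_fn`, and the chosen depths are hypothetical placeholders for illustration, not the authors' actual experimental code.

```python
import torch

@torch.no_grad()
def probe_layer_redundancy(llm_backbone, eval_loader, keep_layers_list, evaluate_fn):
    """Hypothetical probe: keep only the first k transformer layers of an
    LLM-based recommender and measure how a metric such as HR@10 changes.

    Assumes llm_backbone exposes its transformer blocks as `llm_backbone.layers`
    and that evaluate_fn(model, loader) returns the recommendation metric.
    """
    all_layers = list(llm_backbone.layers)
    results = {}
    for k in keep_layers_list:                                      # e.g. [4, 8, 16, 24, 32]
        llm_backbone.layers = torch.nn.ModuleList(all_layers[:k])   # truncate to first k blocks
        results[k] = evaluate_fn(llm_backbone, eval_loader)
    llm_backbone.layers = torch.nn.ModuleList(all_layers)           # restore the full model
    return results
```

If the metric stays roughly flat as k shrinks, most of the removed depth is redundant for sequence modeling, which is the kind of evidence that motivates distilling the LLM into a much smaller student.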