Faster and Memory-Efficient Training of Sequential Recommendation Models for Large Catalogs

📅 2025-08-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Transformer-based sequential recommendation models trained with the standard cross-entropy loss suffer excessive GPU memory consumption and computational bottlenecks under large item catalogs, since peak memory scales linearly with catalog size, batch size, and sequence length. To address this, the paper proposes the Contrastive Cross-Entropy (CCE) loss, which integrates cross-entropy with efficient negative sampling and leverages a custom Triton GPU kernel for high-throughput, memory-aware computation. The approach reduces GPU memory usage by more than 10×, enabling larger batch sizes and more negative samples and making training feasible on industrial-scale 40 GB GPUs. Empirically, CCE achieves up to a 2× training speedup while improving recommendation accuracy in large-catalog settings. The implementation is open-sourced and demonstrates strong scalability across diverse catalog sizes and model configurations.

📝 Abstract
Sequential recommendation (SR) models with transformer-based architectures are widely adopted in real-world applications, where they require frequent retraining to adapt to ever-changing user preferences. However, training transformer-based SR models often incurs a high computational cost associated with scoring extensive item catalogs, which frequently exceed thousands of items. This occurs mainly due to the use of cross-entropy loss, where peak memory scales proportionally to catalog size, batch size, and sequence length. Recognizing this, practitioners in the field of recommendation systems typically address memory consumption by integrating the cross-entropy (CE) loss with negative sampling, thereby reducing the explicit memory demands of the final layer. However, a small number of negative samples degrades model performance, and, as we demonstrate in our work, increasing the number of negative samples and the batch size further improves the model's performance but rapidly starts to exceed industrial GPUs' memory (~40 GB). In this work, we introduce the CCE- method, which offers a GPU-efficient implementation of the CE loss with negative sampling. Our method accelerates training by up to two times while reducing memory consumption by more than 10 times. Leveraging the memory savings afforded by using CCE- for model training, it becomes feasible to improve accuracy on datasets with a large item catalog compared to models trained with the original PyTorch-implemented loss functions. Finally, we perform an analysis of key memory-related hyperparameters and highlight the necessity of a delicate balance among these factors. We demonstrate that scaling both the number of negative samples and the batch size leads to better results than maximizing only one of them. To facilitate further adoption of CCE-, we release a Triton kernel that efficiently implements the proposed method.
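The core idea in the abstract, cross-entropy restricted to the positive item plus a handful of sampled negatives, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's CCE- method or its Triton kernel; the function name, uniform negative sampling, and all sizes are assumptions for the example:

```python
import numpy as np

def sampled_ce_loss(user_emb, item_emb, pos_ids, num_neg, rng):
    """Cross-entropy over the positive item plus `num_neg` sampled negatives.

    user_emb: (B, d) sequence-state embeddings
    item_emb: (V, d) full catalog embedding table
    pos_ids:  (B,)   ground-truth next-item ids
    Instead of materializing a (B, V) logits matrix, only 1 + num_neg items
    are scored per example, so peak memory is O(B * (1 + num_neg)).
    """
    B, d = user_emb.shape
    V = item_emb.shape[0]
    # Uniform negatives (may occasionally hit the positive; fine for a sketch).
    neg_ids = rng.integers(0, V, size=(B, num_neg))
    cand_ids = np.concatenate([pos_ids[:, None], neg_ids], axis=1)   # (B, 1+k)
    logits = np.einsum('bd,bkd->bk', user_emb, item_emb[cand_ids])   # (B, 1+k)
    # Numerically stable log-softmax; the positive is always in column 0.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()

rng = np.random.default_rng(0)
loss = sampled_ce_loss(rng.standard_normal((4, 8)),
                       rng.standard_normal((1000, 8)),
                       rng.integers(0, 1000, size=4), num_neg=16, rng=rng)
```

The paper's contribution is making exactly this kind of computation memory-aware and fast on the GPU; a naive PyTorch version still materializes intermediate tensors that the fused Triton kernel avoids.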
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of training transformer-based sequential recommendation models
Overcoming GPU memory limitations when scaling negative samples and batch size
Improving training efficiency for large-catalog sequential recommendation systems
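A back-of-the-envelope estimate shows why the full-catalog logits tensor is the bottleneck the bullets above describe. The sizes below are illustrative, not taken from the paper:

```python
# Full cross-entropy materializes a (batch, seq_len, catalog) float32
# logits tensor before the softmax. Illustrative sizes:
batch, seq_len, catalog = 128, 200, 1_000_000
bytes_per_float = 4
logits_gb = batch * seq_len * catalog * bytes_per_float / 1e9
print(f"full-catalog logits: {logits_gb:.0f} GB")   # far beyond a 40 GB GPU

# With 1 positive + 255 sampled negatives per position instead:
sampled_gb = batch * seq_len * 256 * bytes_per_float / 1e9
print(f"sampled logits: {sampled_gb:.3f} GB")
```

Even before activations and optimizer state, the full-catalog tensor alone exceeds a 40 GB card, while the sampled variant shrinks it by roughly the catalog-to-candidates ratio.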
Innovation

Methods, ideas, or system contributions that make the work stand out.

CCE- reduces GPU memory consumption by more than 10×
CCE- accelerates training by up to 2×
CCE- enables scaling both the number of negative samples and the batch size
🔎 Similar Papers
2024-01-03 · ACM Conference on Recommender Systems · Citations: 5
👥 Authors
Maxim Zhelnin · Skolkovo Institute of Technology
Dmitry Redko · Skolkovo Institute of Technology
Volkov Daniil · Skolkovo Institute of Technology
A. Volodkevich · Sber AI Lab, Skolkovo Institute of Technology
P. Sokerin · Skolkovo Institute of Technology
Valeriy Shevchenko · Skolkovo Institute of Technology, IVI
Egor Shvetsov · Skolkovo Institute of Technology
Alexey Vasilev · Sber AI Lab; HSE University; MSU
Darya Denisova · Sber AI Lab
Ruslan Izmailov · Sber
Alexey Zaytsev · Associate professor at BIMSA