🤖 AI Summary
Transformer-based sequential recommendation models trained with standard cross-entropy loss suffer from excessive GPU memory consumption and computational bottlenecks under large item catalogs, since peak memory scales linearly with item count, batch size, and sequence length. To address this, the paper proposes the Contrastive Cross-Entropy (CCE) loss, which combines cross-entropy with efficient negative sampling and leverages a custom Triton GPU kernel for high-throughput, memory-aware computation. The approach reduces GPU memory usage by more than 10×, enabling larger batch sizes and more negative samples, and makes training feasible on industrial-scale 40 GB GPUs. Empirically, CCE speeds up training by up to 2× while improving recommendation accuracy in large-catalog settings. The implementation is open-sourced and demonstrates strong scalability across diverse catalog sizes and model configurations.
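To see why full-softmax cross-entropy becomes the bottleneck, it helps to estimate the size of the logits tensor alone. The sketch below is a back-of-the-envelope calculation with hypothetical sizes (the specific batch size, sequence length, and catalog size are assumptions, not figures from the paper); it only illustrates the linear scaling in all three factors that the summary describes:

```python
# Back-of-the-envelope peak-memory estimate for the logits of a
# full-catalog cross-entropy loss. All sizes below are hypothetical.
batch_size = 256        # sequences per batch
seq_len = 200           # positions scored per sequence
catalog = 1_000_000     # items in the catalog
bytes_fp32 = 4          # bytes per float32 logit

# One logit per (sequence, position, item): B * L * N floats.
logits_bytes = batch_size * seq_len * catalog * bytes_fp32
print(f"{logits_bytes / 2**30:.0f} GiB")  # far beyond a 40 GB GPU
```

Even before gradients and activations, the logits alone dwarf the ~40 GB of an industrial GPU, which is why sampling negatives (and fusing the computation in a kernel) matters.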
📝 Abstract
Sequential recommendation (SR) models with transformer-based architectures are widely adopted in real-world applications, where they require frequent retraining to adapt to ever-changing user preferences. However, training transformer-based SR models often incurs a high computational cost from scoring extensive item catalogs, which frequently exceed thousands of items. This cost arises mainly from the cross-entropy (CE) loss, whose peak memory scales proportionally with catalog size, batch size, and sequence length. Recognizing this, practitioners in recommendation systems typically reduce memory consumption by combining the CE loss with negative sampling, thereby lowering the explicit memory demands of the final layer. However, a small number of negative samples degrades model performance, and, as we demonstrate in this work, increasing the number of negative samples and the batch size further improves the model's performance but quickly exceeds the memory of industrial GPUs (~40 GB). In this work, we introduce the CCE- method, which offers a GPU-efficient implementation of the CE loss with negative sampling. Our method accelerates training by up to two times while reducing memory consumption by more than ten times. The memory savings afforded by CCE- make it feasible to improve model accuracy on datasets with a large item catalog compared to models trained with the original PyTorch-implemented loss functions. Finally, we analyze the key memory-related hyperparameters and highlight the necessity of a delicate balance among them: scaling both the number of negative samples and the batch size leads to better results than maximizing only one of them. To facilitate further adoption of CCE-, we release a Triton kernel that efficiently implements the proposed method.
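The CE-with-negative-sampling computation that the abstract describes can be sketched in plain NumPy. This is an illustrative sketch, not the paper's fused Triton kernel: the function name, uniform negative sampler, and tensor shapes are all assumptions for the example. The point it shows is that logits are computed only over the positive item plus a small candidate set, so memory grows with the number of negatives rather than the catalog size:

```python
import numpy as np

def sampled_ce_loss(hidden, item_emb, positives, num_negatives, rng):
    """Cross-entropy over the positive item plus sampled negatives.

    Illustrative sketch only (names and uniform sampling are assumptions).
    hidden:    (B, d) user-state representations from the transformer
    item_emb:  (N, d) catalog item embeddings
    positives: (B,)   index of the ground-truth next item
    """
    B, d = hidden.shape
    N = item_emb.shape[0]
    # Uniformly sample negatives; production systems often sample by popularity.
    negs = rng.integers(0, N, size=(B, num_negatives))
    cand = np.concatenate([positives[:, None], negs], axis=1)  # (B, 1 + k)
    # Logits only over 1 + k candidates: memory O(B * k) instead of O(B * N).
    logits = np.einsum("bd,bkd->bk", hidden, item_emb[cand])
    # Numerically stable log-softmax; the positive sits at column 0.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()
```

A naive PyTorch version of the same idea still materializes the `(B, 1 + k)` logits and the gathered embeddings as separate tensors; the fused-kernel approach the abstract describes avoids those intermediates, which is where the additional memory savings come from.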