MALLOC: Benchmarking the Memory-aware Long Sequence Compression for Large Sequential Recommendation

📅 2026-01-28
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the high memory overhead incurred by large-scale sequential recommendation systems when processing long user behavior sequences—a challenge often overlooked by existing approaches, which predominantly focus on accuracy while neglecting storage costs. To bridge this gap, the authors propose MALLOC, the first memory-aware benchmark for comprehensive evaluation of long-sequence compression techniques tailored to recommender systems. MALLOC systematically integrates and adapts compression strategies originally developed for large language models, embedding them into mainstream sequential recommendation architectures. Through reproducible, multi-dimensional assessments encompassing accuracy, efficiency, and computational complexity, MALLOC establishes a standardized framework for evaluating memory-performance trade-offs, thereby filling a critical void in systematic benchmarking and demonstrating its effectiveness in balancing memory efficiency with model performance.

📝 Abstract
The scaling law, which indicates that model performance improves with increasing dataset size and model capacity, has fueled a growing trend of expanding recommendation models in both industry and academia. However, the advent of large-scale recommenders also brings significantly higher computational costs, particularly under the long-sequence dependencies inherent in user intent within recommendation systems. Current approaches often rely on pre-storing the intermediate states of past behaviors for each user, thereby reducing the quadratic re-computation cost for subsequent requests. Despite their effectiveness, these methods often treat memory merely as a medium for acceleration, without adequately considering the space overhead it introduces. This presents a critical challenge in real-world recommendation systems with billions of users, each of whom might initiate thousands of interactions and thus require massive memory for state storage. Fortunately, several memory management strategies have been examined for compression in LLMs, though most have not been evaluated on recommendation tasks. To mitigate this gap, we introduce MALLOC, a comprehensive benchmark for memory-aware long-sequence compression. MALLOC presents a comprehensive investigation and systematic classification of memory management techniques applicable to large sequential recommendation. These techniques are integrated into state-of-the-art recommenders, enabling a reproducible and accessible evaluation platform. Through extensive experiments across accuracy, efficiency, and complexity, we demonstrate the holistic reliability of MALLOC in advancing large-scale recommendation. Code is available at https://anonymous.4open.science/r/MALLOC.
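To make the trade-off in the abstract concrete: pre-storing a user's intermediate states turns each new request into an incremental update (linear in history length) instead of a full re-encoding (quadratic for attention-style models), but the cache itself grows linearly per user — the storage cost MALLOC benchmarks. The sketch below is illustrative only; `UserStateCache` and all names are hypothetical and not MALLOC's actual API.

```python
# Minimal sketch of per-user state caching for sequential recommendation.
# Hypothetical names; this is not MALLOC's implementation.
import numpy as np

class UserStateCache:
    """Caches each user's behavior states so a new interaction is a single
    append plus one attention-style read, rather than re-encoding the
    full history from scratch."""

    def __init__(self, dim: int):
        self.dim = dim
        # user_id -> list of cached state vectors, one per past interaction
        self.cache: dict[str, list[np.ndarray]] = {}

    def append(self, user_id: str, item_embedding: np.ndarray) -> None:
        # O(1) per interaction, versus O(n^2) full re-computation.
        self.cache.setdefault(user_id, []).append(item_embedding)

    def score(self, user_id: str, query: np.ndarray) -> np.ndarray:
        # One O(n) attention-style read over the cached states.
        states = np.stack(self.cache[user_id])        # (n, dim)
        logits = states @ query / np.sqrt(self.dim)   # (n,)
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        return weights @ states                       # pooled user representation

    def memory_bytes(self, user_id: str) -> int:
        # The storage overhead the benchmark targets: linear in history length.
        return sum(s.nbytes for s in self.cache.get(user_id, []))
```

With billions of users, this per-user linear growth is what compression techniques (quantization, eviction, merging of cached states) aim to bound, at some cost in recommendation accuracy.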
Problem

Research questions and friction points this paper is trying to address.

memory overhead
long sequence recommendation
large-scale recommender systems
memory-aware compression
sequential recommendation
Innovation

Methods, ideas, or system contributions that make the work stand out.

memory-aware compression
long sequence recommendation
benchmark
memory management
large-scale recommender systems