🤖 AI Summary
To address the excessive memory bandwidth demand (>1 TB/s), severe data movement overhead, and load imbalance in inference of terabyte-scale deep learning recommendation models (DLRMs), this work proposes a hardware–software co-designed compute-in-memory (CIM) system. First, it introduces a novel statistics-aware sharding and tensor-train (TT) compression co-optimization method to enable zero-data-movement in-memory inference over embedding tables. Second, it proposes a mixed-integer programming (MIP)-driven dynamic core configuration mechanism for adaptive scheduling between memory-centric and compute-centric cores. Third, it designs dedicated hardware accelerators for TT decomposition and reconstruction, achieving, within a CIM architecture, the first high-accuracy, high-ratio (>100×) embedding compression. Experiments show a 55.77× inference speedup over a CPU–DRAM system with zero accuracy loss, 13.35× higher energy efficiency than a multi-GPU system, and real-time single-server deployment of industrial-scale TB-level DLRMs.
📝 Abstract
Deep Learning Recommendation Models (DLRMs) play a crucial role in delivering personalized content across web applications such as social networking and video streaming. However, with improvements in performance, the parameter size of DLRMs has grown to terabyte (TB) scales, accompanied by memory bandwidth demands exceeding TB/s levels. Furthermore, the workload intensity within the model varies based on the target mechanism, making it difficult to build an optimized recommendation system. In this paper, we propose SCRec, a scalable computational storage recommendation system that can handle TB-scale industrial DLRMs while satisfying their high bandwidth requirements. SCRec utilizes a software framework that features a mixed-integer programming (MIP)-based cost model, efficiently fetching data based on data access patterns and adaptively configuring memory-centric and compute-centric cores. Additionally, SCRec integrates hardware acceleration cores to enhance DLRM computations, in particular enabling high-performance reconstruction of approximated embedding vectors from the extremely compressed tensor-train (TT) format. By combining its software framework and hardware accelerators, and by eliminating data communication overhead through single-server deployment, SCRec achieves substantial improvements in DLRM inference performance: up to 55.77$\times$ speedup compared to a CPU-DRAM system with no loss in accuracy, and up to 13.35$\times$ energy efficiency gains over a multi-GPU system.
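To make the TT-compression idea concrete, here is a minimal sketch (not the paper's implementation, and with hypothetical core shapes chosen for illustration) of how a single embedding row can be reconstructed on the fly from two small TT cores instead of a dense table. This on-demand reconstruction is what lets TT-format embeddings trade a few small matrix multiplies for a large cut in storage:

```python
import numpy as np

# Illustrative sketch: a 2-core tensor-train (TT) factorization of an
# embedding table with (I1*I2) rows, (J1*J2) columns, and TT-rank R.
# All shapes below are hypothetical, chosen only to keep the demo small.
I1, I2, J1, J2, R = 8, 8, 4, 4, 3
rng = np.random.default_rng(0)
G1 = rng.standard_normal((I1, J1, R))  # first TT core
G2 = rng.standard_normal((R, I2, J2))  # second TT core

def tt_lookup(row):
    """Reconstruct one embedding row without materializing the full table."""
    i1, i2 = divmod(row, I2)
    # (J1, R) @ (R, J2) -> (J1, J2), flattened to the D = J1*J2 vector
    return (G1[i1] @ G2[:, i2]).reshape(J1 * J2)

# Storage comparison for this toy example:
#   dense table: I1*I2*J1*J2 = 1024 floats
#   TT cores:    I1*J1*R + R*I2*J2 = 192 floats (~5.3x smaller; real
#   industrial tables reach far higher ratios, as the summary's >100x notes)
full = np.einsum('aur,rbv->abuv', G1, G2).reshape(I1 * I2, J1 * J2)
assert np.allclose(tt_lookup(10), full[10])  # lookup matches the dense row
```

The key property is that a lookup touches only two small cores, which is why dedicated reconstruction hardware (as SCRec proposes) can hide the extra compute behind memory access.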