Deep Recommender Models Inference: Automatic Asymmetric Data Flow Optimization

📅 2025-07-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In DLRM inference, the embedding layer's random memory accesses to sparse embedding tables of widely varying sizes constitute a critical performance bottleneck. To address this, we propose a customized data-flow and asymmetric embedding-table mapping framework tailored to Ascend AI accelerators. First, we design four efficient single-core lookup strategies; second, we develop an automated multi-core mapping mechanism that accommodates heterogeneous embedding-table size distributions, thereby reducing sensitivity to query-distribution skew and enhancing system robustness. Evaluated on real-world workloads, our approach achieves a 1.5×–6.5× speedup over the NVIDIA A100 and the default compiler baseline, and over 20× acceleration under highly imbalanced table-size distributions. It significantly reduces the embedding lookup's dependency on data distribution, establishing a scalable hardware-software co-optimization paradigm for efficient DLRM inference on multi-core SoCs.

πŸ“ Abstract
Deep Recommender Model (DLRM) inference is a fundamental AI workload, accounting for more than 79% of the total AI workload in Meta's data centers. The performance bottleneck of DLRMs is found in the embedding layers, which perform many random memory accesses to retrieve small embedding vectors from tables of various sizes. We propose the design of tailored data flows to speed up embedding look-ups. Namely, we propose four strategies to look up an embedding table effectively on one core, and a framework to automatically map the tables asymmetrically to the multiple cores of a SoC. We assess the effectiveness of our method on Huawei Ascend AI accelerators, comparing it with the default Ascend compiler, and we perform high-level comparisons with the NVIDIA A100. Results show a speed-up ranging from 1.5x up to 6.5x for real workload distributions, and more than 20x for extremely unbalanced distributions. Furthermore, the method proves to be far less dependent on the query distribution than the baseline.
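The access pattern the abstract describes can be sketched as a batched gather over tables of very different sizes. This is a minimal illustrative sketch (shapes, table sizes, and batch size are invented for the example; none of the paper's four lookup strategies or Ascend-specific data flows are reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heterogeneous tables: DLRMs mix tiny and huge embedding tables.
tables = [rng.standard_normal((rows, 16)).astype(np.float32)
          for rows in (1_000, 50_000, 2_000_000)]

def embedding_lookup(table, indices):
    # A random gather: each query row may touch a different cache line,
    # which is why the embedding layer is memory-bound, not compute-bound.
    return table[indices]

batch = 4096
queries = [rng.integers(0, t.shape[0], size=batch) for t in tables]
looked_up = [embedding_lookup(t, q) for t, q in zip(tables, queries)]
# Each result is (batch, embedding_dim); downstream MLP layers consume them.
```

Because the gather indices are data-dependent, throughput hinges on how tables and queries are laid out across cores, which is the dimension the paper optimizes.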
Problem

Research questions and friction points this paper is trying to address.

Optimize embedding layer bottlenecks in DLRM inference
Automate asymmetric data flow for multi-core SoC mapping
Speed up embedding lookups with tailored data flows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tailored data flows for embedding lookups
Automatic asymmetric table mapping to SoC cores
Optimized performance for unbalanced distributions
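One plausible way to picture the asymmetric table-to-core mapping is a greedy load-balancing heuristic: assign each table, heaviest first, to the currently least-loaded core. This is only a stand-in sketch; the paper's automated framework, its cost model, and its per-core lookup strategies are not reproduced here, and `table_loads` is a hypothetical per-table cost estimate:

```python
import heapq

def map_tables_to_cores(table_loads, num_cores):
    """Greedily place each table on the least-loaded core (LPT heuristic).

    table_loads: hypothetical per-table cost estimates (e.g. size x query rate).
    Returns {core_id: [table_ids]} with deliberately asymmetric assignments.
    """
    # Min-heap of (accumulated_load, core_id, assigned_tables).
    heap = [(0.0, core, []) for core in range(num_cores)]
    heapq.heapify(heap)
    # Heaviest tables first, so large tables anchor cores early.
    for tid, load in sorted(enumerate(table_loads), key=lambda x: -x[1]):
        total, core, assigned = heapq.heappop(heap)
        assigned.append(tid)
        heapq.heappush(heap, (total + load, core, assigned))
    return {core: assigned for _, core, assigned in heap}

mapping = map_tables_to_cores([100, 10, 10, 80], num_cores=2)
```

Under a skewed size distribution this yields uneven table counts per core but roughly even load, which is the intuition behind mapping tables asymmetrically rather than round-robin.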