Two-dimensional Sparse Parallelism for Large Scale Deep Learning Recommendation Model Training

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address scalability bottlenecks caused by sparse embedding tables in large-scale recommendation models, including memory explosion, high inter-device communication overhead, and load imbalance, this paper proposes two-dimensional sparse parallelism: a hybrid architecture that jointly partitions embedding tables along both rows and columns by layering data parallelism on top of model parallelism. The authors design momentum-scaled row-wise AdaGrad, an adaptive optimizer that scales per-row momentum to reduce activation memory peaks and mitigate gradient-update skew, and they optimize the all-to-all communication pattern to minimize cross-device embedding lookup latency. Evaluated on a 4096-GPU cluster, the approach achieves near-linear weak scaling (92.5% efficiency), improves training throughput by 3.1×, reduces memory footprint by 37%, and preserves model accuracy, establishing a new state of the art for distributed training of recommendation systems.
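The two-dimensional layout described above can be illustrated with a small sketch. The structure below is an assumption for illustration, not the paper's implementation: GPUs are arranged into replica groups (the data-parallel dimension), embedding rows are sharded across the ranks inside each group (the model-parallel dimension), and each group holds a full replica of the table, so lookups never need to leave a group.

```python
# Sketch of a 2D device mesh for sparse embedding sharding.
# Assumed setup (not from the paper): 8 GPUs arranged as 2 replica
# groups x 4 model-parallel ranks; a table is sharded row-wise
# within each group and replicated across groups.

def build_mesh(world_size, group_size):
    """Arrange ranks into (num_groups x group_size) replica groups."""
    assert world_size % group_size == 0
    return [list(range(g * group_size, (g + 1) * group_size))
            for g in range(world_size // group_size)]

def shard_owner(row, num_rows, group):
    """Map an embedding row to the rank that owns it within one group."""
    rows_per_rank = (num_rows + len(group) - 1) // len(group)
    return group[row // rows_per_rank]

mesh = build_mesh(world_size=8, group_size=4)
# Rank layout: [[0, 1, 2, 3], [4, 5, 6, 7]]
# Row 1_000_000 of a 4M-row table lives on one rank per group:
owners = [shard_owner(1_000_000, 4_000_000, g) for g in mesh]
```

Because every group owns a complete copy of the table, the all-to-all for lookups is confined to `group_size` ranks instead of the full `world_size`, which is the communication saving the summary refers to.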

📝 Abstract
The increasing complexity of deep learning recommendation models (DLRM) has led to a growing need for large-scale distributed systems that can efficiently train on vast amounts of data. In DLRM, the sparse embedding table is a crucial component for managing sparse categorical features. Typically, these tables in industrial DLRMs contain trillions of parameters, necessitating model parallelism strategies to address memory constraints. However, as training systems scale to massive numbers of GPUs, traditional fully sharded parallelism strategies for embedding tables pose significant scalability challenges, including load imbalance and straggler issues, intensive lookup communication, and heavy embedding activation memory. To overcome these limitations, we propose a novel two-dimensional sparse parallelism approach. Rather than fully sharding tables across all GPUs, our solution introduces data parallelism on top of model parallelism. This enables efficient all-to-all communication and reduces peak memory consumption. Additionally, we have developed the momentum-scaled row-wise AdaGrad algorithm to mitigate performance losses associated with the shift in training paradigms. Our extensive experiments demonstrate that the proposed approach significantly enhances training efficiency while maintaining model performance parity. It achieves nearly linear training speed scaling up to 4K GPUs, setting a new state-of-the-art benchmark for recommendation model training.
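The "intensive lookup communication" the abstract mentions comes from the embedding all-to-all: each rank buckets its batch of sparse feature IDs by the rank that owns the corresponding rows, then exchanges buckets. The sketch below simulates that exchange in plain Python; the modulo ownership rule is an assumption for illustration (real systems use planned shard maps).

```python
# Sketch of the embedding-lookup all-to-all. Each rank buckets its
# sparse IDs by owning rank and exchanges buckets; the owning rank
# would then return the embedding vectors. Ownership here is a
# simple modulo rule (an assumption, not the paper's sharding plan).

def bucket_lookups(ids, world_size):
    """Group this rank's lookup IDs by the rank that owns each row."""
    buckets = {r: [] for r in range(world_size)}
    for i in ids:
        buckets[i % world_size].append(i)
    return buckets

def all_to_all(per_rank_buckets):
    """Simulate the collective: rank `dst` receives every bucket addressed to it."""
    world_size = len(per_rank_buckets)
    return [
        [i for src in range(world_size) for i in per_rank_buckets[src][dst]]
        for dst in range(world_size)
    ]

# 4 ranks, each looking up 3 IDs from its local batch.
sent = [bucket_lookups([r, r + 1, r + 2], world_size=4) for r in range(4)]
received = all_to_all(sent)
```

Under full sharding this exchange spans all ranks; the 2D layout runs the same pattern only within a replica group, shrinking the peer count per rank from `world_size - 1` to `group_size - 1`.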
Problem

Research questions and friction points this paper is trying to address.

Addresses scalability challenges in large-scale DLRM training
Reduces memory and communication costs in embedding tables
Improves training efficiency while maintaining model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-dimensional sparse parallelism for DLRM
Data parallelism atop model parallelism
Momentum-scaled row-wise AdaGrad algorithm
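The last bullet builds on row-wise AdaGrad, which keeps a single second-moment accumulator per embedding row rather than per element. Below is a minimal sketch of the standard row-wise AdaGrad update; the paper's momentum-scaled variant is not specified in this summary, so no momentum term is shown and the function name is hypothetical.

```python
import math

def rowwise_adagrad_step(weights, grads, accum, lr=0.01, eps=1e-8):
    """One sparse step: `grads` maps row index -> gradient vector.

    Keeping a single accumulator per row (the mean of squared
    gradients) shrinks optimizer state from O(rows * dim) to
    O(rows), the key memory saving for huge embedding tables.
    """
    for row, g in grads.items():
        accum[row] += sum(x * x for x in g) / len(g)  # row-wise 2nd moment
        scale = lr / (math.sqrt(accum[row]) + eps)
        weights[row] = [w - scale * x for w, x in zip(weights[row], g)]
    return weights, accum

# Toy table: 4 rows of dim 2; only row 1 receives a gradient.
w = [[0.0, 0.0] for _ in range(4)]
acc = [0.0] * 4
w, acc = rowwise_adagrad_step(w, {1: [3.0, 4.0]}, acc)
```

Only touched rows pay any optimizer cost, which is why this family of optimizers is standard for sparse embedding training.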
Authors

Xin Zhang (Meta, Inc.)
Quanyu Zhu (Meta, Inc.)
Liangbei Xu (Meta, Inc.)
Zain Huda (Meta, Inc.)
Wang Zhou (Sun Yat-Sen University)
Jin Fang (Meta, Inc.)
Dennis van der Staay (Meta, Inc.)
Yuxi Hu (Graz University of Technology)
Jade Nie (Meta, Inc.)
Jiyan Yang (Stanford University)
Chunzhi Yang (Meta, Inc.)