Hetu v2: A General and Scalable Deep Learning System with Hierarchical and Heterogeneous Single Program Multiple Data Annotations

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional SPMD (Single Program, Multiple Data) paradigms struggle to express and optimize heterogeneous parallelism in large-scale distributed training due to spatiotemporal load imbalance caused by hardware heterogeneity and dynamic data characteristics. Method: the authors propose HSPMD, an extended SPMD paradigm that supports asymmetric tensor sharding and composable hierarchical communication primitives, enabling unified modeling and automatic optimization of heterogeneous parallel strategies while retaining single-device declarative programming. Key techniques include heterogeneous SPMD annotation, progressive computational graph specialization, dynamic graph switching, and hierarchical communication scheduling. Results: experiments show that HSPMD matches or surpasses domain-specific systems on heterogeneous clusters, elastic training, and variable-length sequence scenarios, significantly improving the flexibility, adaptability, and efficiency of large-model training.

📝 Abstract
The Single Program Multiple Data (SPMD) paradigm provides a unified abstraction to annotate various parallel dimensions in distributed deep learning (DL) training. With SPMD, users can write training programs from the viewpoint of a single device, and the system will automatically deduce the tensor sharding and communication patterns. However, with the recent development in large-scale DL models, distributed training exhibits spatial and temporal workload heterogeneity, arising from both device disparities (e.g., mixed hardware, failures) and data variations (e.g., uneven sequence lengths). Such heterogeneity violates SPMD's assumption of uniform workload partitioning, which restricts its ability to express and optimize heterogeneous parallel strategies effectively. To address this, we propose HSPMD within the Hetu v2 system to achieve general and scalable DL training. HSPMD extends SPMD's annotations to support asymmetric sharding and composes standard communication primitives for hierarchical communication, all while retaining the simplicity of a single-device declarative programming model. Leveraging HSPMD, Hetu handles spatial heterogeneity through progressive graph specialization, enabling device-specific execution logic, and addresses temporal heterogeneity via dynamic graph switching. Evaluations on heterogeneous clusters, elastic training, and mixed-length data scenarios show that HSPMD matches or outperforms specialized systems, providing a flexible and efficient solution for modern large-scale model training.
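The abstract's central idea, asymmetric sharding, can be illustrated with a small sketch: unlike classic SPMD, where a tensor dimension is split into equal shards, an asymmetric annotation lets shard sizes track per-device capability. The function below is a conceptual illustration only, not Hetu's actual API; the proportional-split policy and the `capacities` parameter are assumptions for the example.

```python
import numpy as np

def asymmetric_shard(tensor, capacities):
    """Split a tensor's first dimension in proportion to per-device
    capacities. Unlike classic SPMD sharding, the shards need not be
    equal-sized. (Conceptual sketch, not Hetu's actual API.)"""
    total = sum(capacities)
    n = tensor.shape[0]
    # Rows assigned to each device, proportional to its capacity.
    sizes = [n * c // total for c in capacities]
    sizes[-1] += n - sum(sizes)  # give any integer remainder to the last device
    offsets = np.cumsum([0] + sizes[:-1])
    return [tensor[o:o + s] for o, s in zip(offsets, sizes)]

# Example: a 12-row activation tensor across one fast GPU (2x capacity)
# and two slower ones.
x = np.arange(24).reshape(12, 2)
shards = asymmetric_shard(x, capacities=[2, 1, 1])
print([s.shape[0] for s in shards])  # -> [6, 3, 3]
```

With uniform capacities this degenerates to standard SPMD's equal partitioning, which is why the paper can present HSPMD as a strict extension of the SPMD annotation model.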
Problem

Research questions and friction points this paper is trying to address.

Addresses workload heterogeneity in distributed deep learning training
Extends SPMD to support asymmetric sharding and hierarchical communication
Enables flexible and efficient large-scale model training with HSPMD
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends SPMD with asymmetric sharding support
Uses progressive graph specialization for spatial heterogeneity
Implements dynamic graph switching for temporal heterogeneity
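The third bullet, dynamic graph switching for temporal heterogeneity, can be sketched as pre-building one specialized execution plan per workload bucket (e.g., sequence-length ranges) and switching between them at runtime instead of recompiling. The bucket boundaries, `build_plan` stand-in, and plan contents below are hypothetical, chosen only to illustrate the switching mechanism.

```python
# Conceptual sketch of dynamic graph switching (not Hetu's actual API):
# pre-build one execution plan per sequence-length bucket, then pick the
# matching plan for each batch at runtime.

BUCKETS = [512, 1024, 2048, 4096]

def build_plan(bucket):
    # Stand-in for compiling a graph specialized to this bucket.
    return {"max_seq_len": bucket, "micro_batches": 4096 // bucket}

PLANS = {b: build_plan(b) for b in BUCKETS}

def select_plan(seq_len):
    """Switch to the smallest pre-built graph that fits the batch."""
    for b in BUCKETS:
        if seq_len <= b:
            return PLANS[b]
    raise ValueError(f"sequence length {seq_len} exceeds largest bucket")

print(select_plan(900)["max_seq_len"])  # -> 1024
```

The point of the pattern is that per-batch variation (temporal heterogeneity) is handled by cheap plan selection rather than re-deriving sharding and communication from scratch each step.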
👥 Authors
Haoyang Li
The Hetu Team @ Peking University
Fangcheng Fu
Shanghai Jiao Tong University
machine learning, deep learning, MLSys, distributed computation
Hao Ge
The Hetu Team @ Peking University
Sheng Lin
The Hetu Team @ Peking University
Xuanyu Wang
The Hetu Team @ Peking University
Jiawen Niu
The Hetu Team @ Peking University
Xupeng Miao
Purdue University
Machine Learning Systems, Data Management
Bin Cui
The Hetu Team @ Peking University