🤖 AI Summary
Problem: The traditional SPMD (Single Program, Multiple Data) paradigm struggles to express and optimize heterogeneous parallelism in large-scale distributed training, owing to spatiotemporal load imbalance caused by hardware heterogeneity and dynamic data characteristics. Method: We propose HSPMD, an extended SPMD paradigm that supports asymmetric tensor sharding and composable hierarchical communication primitives, enabling unified modeling and automatic optimization of heterogeneous parallel strategies under a single-device declarative programming model. Key techniques include progressive computational-graph specialization, dynamic graph switching, hierarchical communication scheduling, and heterogeneous SPMD annotations. Results: Experiments show that HSPMD matches or surpasses domain-specific systems across heterogeneous clusters, elastic training, and variable-length-sequence scenarios, significantly improving the flexibility, adaptability, and efficiency of large-model training.
📝 Abstract
The Single Program Multiple Data (SPMD) paradigm provides a unified abstraction to annotate various parallel dimensions in distributed deep learning (DL) training. With SPMD, users can write training programs from the viewpoint of a single device, and the system automatically deduces the tensor sharding and communication patterns. However, with recent developments in large-scale DL models, distributed training exhibits spatial and temporal workload heterogeneity, arising from both device disparities (e.g., mixed hardware, failures) and data variations (e.g., uneven sequence lengths). Such heterogeneity violates SPMD's assumption of uniform workload partitioning, restricting its ability to express and optimize heterogeneous parallel strategies effectively. To address this, we propose HSPMD within the Hetu v2 system to achieve general and scalable DL training. HSPMD extends SPMD's annotations to support asymmetric sharding and composes standard communication primitives for hierarchical communication, all while retaining the simplicity of a single-device declarative programming model. Leveraging HSPMD, Hetu handles spatial heterogeneity through progressive graph specialization, enabling device-specific execution logic, and addresses temporal heterogeneity via dynamic graph switching. Evaluations on heterogeneous clusters, elastic training, and mixed-length data scenarios show that HSPMD matches or outperforms specialized systems, providing a flexible and efficient solution for modern large-scale model training.
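To make the core contrast concrete, here is a minimal Python toy (not Hetu's actual API; the function names and the capacity-weighted scheme are illustrative assumptions) showing why uniform SPMD sharding breaks under device heterogeneity, and how an asymmetric split in the spirit of HSPMD can assign shards proportional to device capacity:

```python
def shard_uniform(work, num_devices):
    """Classic SPMD assumption: every device gets an equal shard.
    This only works when the workload divides evenly and all devices
    are equally fast."""
    assert work % num_devices == 0, "uniform SPMD needs evenly divisible work"
    per_device = work // num_devices
    return [per_device] * num_devices

def shard_asymmetric(work, device_capacity):
    """Asymmetric sharding sketch (hypothetical, HSPMD-inspired):
    split the workload proportionally to each device's relative
    capacity, handing any rounding remainder to the last device."""
    total = sum(device_capacity)
    shards = [work * c // total for c in device_capacity]
    shards[-1] += work - sum(shards)  # absorb integer-division remainder
    return shards

# Homogeneous cluster: both schemes agree.
print(shard_uniform(4096, 4))               # [1024, 1024, 1024, 1024]

# Heterogeneous cluster (e.g., two fast GPUs, two slow ones):
# faster devices receive proportionally larger shards.
print(shard_asymmetric(4096, [3, 3, 1, 1]))  # [1536, 1536, 512, 512]
```

The same proportional idea extends to the paper's variable-length-sequence setting: treating per-batch sequence lengths as the "work" lets shard sizes be recomputed per step rather than fixed once at program start.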