🤖 AI Summary
Existing distributed deep learning systems lack joint optimization across parallelization strategies, device memory, and network topology, resulting in high communication overhead, poor memory efficiency, and limited scalability. This work proposes NEST, a framework that, for the first time, unifies multidimensional parallelism—including tensor, pipeline, data, and expert parallelism—together with network latency and memory constraints into a structured dynamic programming formulation. NEST co-optimizes device placement over the operator graph by explicitly modeling AllReduce communication delays and incorporating detailed profiling of computational and memory resources. Evaluated across diverse hardware and network environments, NEST achieves up to 2.43× higher throughput while significantly improving memory efficiency and system scalability.
📝 Abstract
The growing scale of deep learning demands distributed training frameworks that jointly reason about parallelism, memory, and network topology. Prior work often relies on heuristic or topology-agnostic search, handling communication and memory separately. Without per-device memory awareness, these methods typically ensure feasibility post hoc by sharding parameters and activations across many devices, which increases synchronization, inflates communication, and underutilizes compute, limiting scalability and efficiency on real datacenter networks. We present NEST, a network-, compute-, and memory-aware device placement framework that unifies model parallelism, topology modeling, and memory feasibility via structured dynamic programming. NEST's DP operates on operator graphs with tensor- and expert-parallel configurations, explicit AllReduce latencies across hierarchical or arbitrary networks, and per-device memory and compute profiles. By factoring parallelism across the tensor, pipeline, data, and expert dimensions, NEST defines a principled search space for hybrid strategies while jointly optimizing co-location, network latency, and memory feasibility. Evaluations across diverse hardware and networks show that NEST achieves up to 2.43× higher throughput, better memory efficiency, and improved scalability over state-of-the-art baselines, providing a foundation for co-designing parallelization strategies and datacenter interconnects for next-generation AI infrastructure. The source code of NEST is available at: https://github.com/scai-tech/Nest
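To make the dynamic-programming idea concrete, here is a deliberately simplified sketch, not NEST's actual formulation: it partitions a linear operator chain into contiguous pipeline stages, one per device, minimizing the bottleneck stage time (profiled compute plus a modeled AllReduce latency) subject to per-device memory capacity. The function name `plan_stages` and its parameters are hypothetical; the real system additionally searches over tensor/expert-parallel configurations and arbitrary network topologies.

```python
# Illustrative DP sketch (hypothetical, not NEST's actual algorithm):
# split operators [0, n) into `n_devices` contiguous pipeline stages,
# minimizing the bottleneck stage time under a per-device memory cap.

def plan_stages(compute, memory, capacity, allreduce, n_devices):
    """compute[i], memory[i]: profiled cost/footprint of operator i.
    capacity: per-device memory limit; allreduce: modeled sync latency.
    Returns (bottleneck_time, stage_start_indices), or (inf, None) if
    no memory-feasible partition exists."""
    n = len(compute)
    INF = float("inf")
    # best[k][i]: min bottleneck time using k devices for operators [0, i)
    best = [[INF] * (n + 1) for _ in range(n_devices + 1)]
    choice = [[None] * (n + 1) for _ in range(n_devices + 1)]
    best[0][0] = 0.0
    for k in range(1, n_devices + 1):
        for i in range(1, n + 1):
            stage_c = stage_m = 0.0
            for j in range(i - 1, -1, -1):     # last stage = operators [j, i)
                stage_c += compute[j]
                stage_m += memory[j]
                if stage_m > capacity:         # stage no longer fits in memory
                    break
                cand = max(best[k - 1][j], stage_c + allreduce)
                if cand < best[k][i]:
                    best[k][i] = cand
                    choice[k][i] = j
    if best[n_devices][n] == INF:
        return INF, None
    # Recover stage boundaries by walking the choice table backwards.
    splits, i = [], n
    for k in range(n_devices, 0, -1):
        splits.append(choice[k][i])
        i = choice[k][i]
    return best[n_devices][n], splits[::-1]
```

For example, with `compute=[1, 2, 3, 4]`, `memory=[1, 1, 1, 1]`, `capacity=2`, `allreduce=0.5`, and two devices, the only memory-feasible split is after operator 1, giving stages `[0, 2)` and `[2, 4)` with a bottleneck of `3 + 4 + 0.5 = 7.5`. A joint formulation like NEST's extends this recurrence with per-stage parallel configurations and topology-dependent communication terms rather than a single scalar latency.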