🤖 AI Summary
Packet-level discrete-event network simulation (PLDES) offers high accuracy for evaluating large model training performance but suffers from prohibitive runtime costs, and existing acceleration techniques struggle to balance fidelity and efficiency. This work proposes Wormhole, a novel simulation kernel that exploits repetitive contention patterns and steady-state behaviors inherent in distributed training traffic. Wormhole introduces the first user-transparent mechanism that automatically memoizes non-steady-state simulation states and skips redundant events during steady-state phases. Integrated with network partitioning, flow-rate-driven steady-state detection, and multithreaded parallelism within ns-3, Wormhole achieves up to 744× speedup (510× on MoE workloads) with under 1% error; with parallel execution, it reaches 1012× speedup, reducing the simulation time for GPT-13B on 128 GPUs from 9 hours to just 5 minutes.
📝 Abstract
Packet-level discrete-event simulation (PLDES) is a prevalent tool for evaluating the detailed performance of large model training. Although PLDES offers high fidelity and generality, its slow performance has long plagued networking practitioners. Existing optimization techniques either simplify the network model, which introduces large errors, or parallelize execution across multiple processors, which has an inherent upper bound on speedup. This paper explores an alternative optimization direction that reduces the computational load of PLDES while maintaining high fidelity. Our key insight is that, in distributed LLM training, packet-level traffic often exhibits repetitive contention patterns and steady states in which flow rates stabilize; skipping these redundant discrete events speeds up the simulation considerably with negligible error. We realize this idea in Wormhole, a user-transparent PLDES kernel that automatically memoizes non-steady-state simulation states and skips redundant events during steady-state phases. Wormhole combines network partitioning, state memoization and reuse, and rate-based steady-state identification to accurately determine each flow's steady-state periods while maintaining simulation consistency after fast-forwarding. Experiments demonstrate that Wormhole achieves a 744× speedup over the original ns-3 (510× on MoE workloads) with a bounded error of <1%. Combining Wormhole with existing multithreaded parallelization techniques yields a 1012× speedup, reducing the simulation time for one GPT-13B training run on 128 GPUs from 9 hours to 5 minutes.
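The rate-based steady-state idea described above can be sketched as follows. This is a minimal illustrative sketch, not Wormhole's actual implementation: the class name, window size, tolerance, and `fast_forward` helper are all hypothetical. The detector declares a flow steady once its recent rate samples stop varying beyond a small relative tolerance, at which point per-packet events could be replaced by an analytical fast-forward.

```python
from collections import deque

class SteadyStateDetector:
    """Hypothetical rate-based steady-state detector (illustrative only)."""

    def __init__(self, window=8, tolerance=0.01):
        self.samples = deque(maxlen=window)  # recent flow-rate samples (bytes/s)
        self.tolerance = tolerance           # max relative spread to call "steady"

    def observe(self, rate):
        self.samples.append(rate)

    def is_steady(self):
        # Require a full window of samples whose relative spread is small.
        if len(self.samples) < self.samples.maxlen:
            return False
        lo, hi = min(self.samples), max(self.samples)
        return hi > 0 and (hi - lo) / hi <= self.tolerance

def fast_forward(remaining_bytes, steady_rate):
    """Skip per-packet events: time to drain the flow at its stabilized rate."""
    return remaining_bytes / steady_rate
```

In a real kernel, detecting steadiness per flow is only half the problem; the abstract notes that Wormhole also memoizes non-steady-state simulation states and must keep the simulation consistent after fast-forwarding, which this sketch does not model.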