PRISM: Probabilistic Runtime Insights and Scalable Performance Modeling for Large-Scale Distributed Training

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large-scale distributed training exhibits significant runtime performance volatility at the 10,000+ GPU scale: GPU time variability reaches 9% on 64k GPUs, and GEMM performance varies by up to 14% across hardware platforms and deployment environments. To address this, we propose PRISM, the first probabilistic performance modeling framework tailored to ultra-large-scale training, integrating GPU micro-benchmarks, statistical modeling, and real-system measurements; its training time predictions match real-system measurements within a 20.8% Kolmogorov–Smirnov distance. The framework enables statistically grounded probabilistic guarantees on training time, supports sensitivity analysis of parallelization strategies, and facilitates variability-aware optimization. Empirical analysis identifies the AllGather and ReduceScatter communication kernels as the primary sources of step-time variability, while variation-aware computation node placement unlocks up to 1.26× performance improvement potential. This work establishes a novel paradigm for co-optimizing stability and efficiency in 10,000+ GPU training systems.

📝 Abstract
Large model training beyond tens of thousands of GPUs is uncharted territory. At such scales, disruptions to the training process are not a matter of if, but a matter of when -- a stochastic process degrading training productivity. Dynamic runtime variation will become increasingly frequent as training scales up and GPUs are operated in increasingly power-limited and thermally stressed environments. At the 64k GPU scale, we already observe 9% GPU time variability for frontier foundation model training. To understand potential causes of variability, we analyze GPU microbenchmarks at scale across a variety of platforms, showing up to 14% variation in GPU performance on GEMM workloads depending on training hardware and deployment environment. Motivated by our analysis and the large design space around performance variability, we present PRISM -- a performance modeling framework that accounts for the stochastic nature of large-scale distributed training. The core of PRISM is a statistical method that provides a quantifiable measure for probabilistic guarantees on training time. Using PRISM, we explore the design and optimization space of distributed training, from parallelization methods to next-generation training systems. PRISM is validated against real-system measurements, showing training time prediction accuracy within a 20.8% Kolmogorov-Smirnov distance. Using PRISM, we demonstrate that, depending on computation node placement, up to 1.26x performance improvement potential is available if we factor in the sensitivities of parallelization strategies to variation. In addition, we use PRISM to identify kernels to optimize for reducing performance variability and to predict the probability of slow-down for large-scale jobs where variation is magnified. We find that optimizing communication kernels, such as AllGather and ReduceScatter, contributes most to minimizing variability in training step time.
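The abstract's central observation -- that synchronous collectives such as AllGather gate every training step on the slowest participating GPU, so per-GPU variability is amplified as the job scales -- can be illustrated with a toy Monte Carlo sketch. This is not PRISM's actual model: the lognormal distribution, the 3% coefficient of variation, and all function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_step_times(n_gpus, n_steps, mean_ms=100.0, cv=0.03):
    """Per-GPU compute times with multiplicative variability (assumed lognormal).

    Parameterized so the per-GPU mean is mean_ms regardless of cv.
    """
    sigma = np.sqrt(np.log(1 + cv**2))
    return mean_ms * rng.lognormal(-sigma**2 / 2, sigma, size=(n_steps, n_gpus))

# Synchronous collectives make each step wait for the slowest worker,
# so the step time is the max across GPUs -- the tail worsens with scale.
for n in (8, 1024, 65536):
    steps = sample_step_times(n, n_steps=200).max(axis=1)
    p50, p99 = np.percentile(steps, [50, 99])
    print(f"{n:>6} GPUs: median {p50:.1f} ms, p99 {p99:.1f} ms")
```

Repeating such sampling many times yields an empirical step-time distribution from which probabilistic guarantees (e.g., a 99th-percentile bound on training time) can be read off, which is the flavor of analysis the paper describes.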
Problem

Research questions and friction points this paper is trying to address.

Modeling stochastic performance variability in large-scale distributed training systems
Predicting training time with probabilistic guarantees under hardware variations
Optimizing parallelization strategies to mitigate GPU performance degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic modeling framework for distributed training
Statistical method providing quantifiable training time guarantees
Optimizing communication kernels to reduce performance variability
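The 20.8% Kolmogorov-Smirnov distance the paper reports for validation is the maximum vertical gap between the empirical CDFs of predicted and measured step times. A generic two-sample sketch (the sample data below is synthetic, not the paper's measurements):

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample KS statistic: max vertical gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(1)
predicted = rng.normal(100, 5, 2000)  # model-predicted step times (ms), synthetic
measured = rng.normal(102, 6, 2000)   # "real-system" step times (ms), synthetic
print(f"KS distance: {ks_distance(predicted, measured):.3f}")
```

A distance of 0 means the two distributions coincide exactly and 1 means they are disjoint, so a reported 20.8% KS distance bounds how far the predicted distribution's CDF ever strays from the measured one.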