🤖 AI Summary
In multi-tenant, large-scale LLM training platforms, performance bottlenecks are difficult to diagnose and resource waste is severe, because platform providers see jobs as black boxes and training is tightly synchronized. Method: This paper proposes the first production-ready, non-intrusive black-box performance diagnosis framework. It reconstructs training timelines from low-level network flow data (error < 0.3%), and integrates distributed training behavior modeling, temporal pattern recognition, and lightweight real-time monitoring to automatically infer parallelism strategies and localize fine-grained performance issues. Contribution/Results: Despite the platform provider's limited viewpoint, the framework precisely identifies and attributes root causes for common problems, including communication bottlenecks, load imbalance, and GPU idleness. Evaluated on Platform-X, it significantly improves diagnosis efficiency and resource utilization.
📝 Abstract
Large Language Models (LLMs) have brought revolutionary changes to diverse fields, making LLM training of utmost importance for modern enterprises. To meet this demand, multi-tenant large-scale LLM training platforms have been built to offer LLM training services. Nevertheless, due to the complexity and synchronous nature of the LLM training process, performance issues occur frequently and can result in substantial resource wastage. The limited visibility from the perspective of platform providers impedes existing profiling methods and poses challenges to monitoring and diagnosing the performance of LLM training jobs. This paper is the first to propose using underlying network flow data to reconstruct the training timelines of jobs, based on the distinct characteristics of the LLM training procedure. We design LLMPrism, the first black-box performance diagnosis system for LLM training platforms. By progressively recognizing LLM training jobs, identifying their parallelism strategies, and reconstructing the training timelines, LLMPrism achieves non-intrusive, lightweight, and continuous monitoring of LLM training systems. Leveraging this monitoring capability, it further diagnoses potential performance issues effectively. Since Oct. 2024, LLMPrism has been deployed on our large-scale production Platform-X, where evaluations and deployment experience demonstrate that LLMPrism achieves accurate timeline reconstruction with an error within 0.3% and effectively diagnoses various performance issues.
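The core intuition, that synchronized training iterations leave a periodic signature in network traffic, can be illustrated with a toy sketch. This is not the authors' algorithm; the data and the autocorrelation-based period estimator below are purely hypothetical stand-ins for LLMPrism's flow-based timeline reconstruction.

```python
import random

# Synthetic flow-volume time series: a traffic burst every 50 samples,
# mimicking the periodic collective communication of training iterations.
# (Hypothetical data; LLMPrism works on production network flow logs.)
PERIOD = 50
N = 1000
rng = random.Random(0)
signal = [0.1 + rng.gauss(0, 0.02) for _ in range(N)]  # background traffic
for i in range(0, N, PERIOD):
    signal[i] += 1.0  # per-iteration communication burst

def estimate_period(x, max_lag):
    """Estimate the dominant period as the lag of the autocorrelation peak."""
    mean = sum(x) / len(x)
    xc = [v - mean for v in x]
    def acf(lag):
        return sum(xc[i] * xc[i + lag] for i in range(len(xc) - lag))
    # Skip lag 0 (trivially maximal) and pick the strongest correlation.
    return max(range(1, max_lag), key=acf)

print(estimate_period(signal, max_lag=200))  # recovers the period: 50
```

Once iteration boundaries are recovered this way, per-iteration anomalies (a lengthening period, or a rank whose bursts lag the others) become visible without any instrumentation inside the job, which is the kind of black-box signal the paper builds on.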