LLMPrism: Black-box Performance Diagnosis for Production LLM Training Platforms

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-tenant, large-scale LLM training platforms, performance bottlenecks are hard to diagnose and resource waste is severe, because the platform provider sees training jobs as black boxes and the training process is tightly synchronized. Method: This paper proposes the first production-ready, non-intrusive black-box performance diagnosis framework. It reconstructs training timelines from low-level network flow data (timeline error below 0.3%), and integrates distributed-training behavior modeling, temporal pattern recognition, and lightweight real-time monitoring to automatically infer parallelism strategies and localize fine-grained performance issues. Contribution/Results: Working solely from the platform provider's limited viewpoint, the framework precisely identifies and attributes root causes for common problems, including communication bottlenecks, load imbalance, and GPU idleness. Deployed and evaluated on the production Platform-X, it significantly improves diagnosis efficiency and resource utilization.

📝 Abstract
Large Language Models (LLMs) have brought about revolutionary changes in diverse fields, rendering LLM training of utmost importance for modern enterprises. To meet this demand, multi-tenant large-scale LLM training platforms have been built to offer LLM training services. Nevertheless, due to the complexity and synchronous nature of the LLM training process, performance issues occur frequently and can result in substantial resource wastage. The limited visibility from the perspective of platform providers impedes existing profiling methods and poses challenges to monitoring and diagnosing the performance of LLM training jobs. For the first time, this paper proposes utilizing underlying network flow data to reconstruct the training timelines of jobs based on the distinct characteristics of the LLM training procedure. We design LLMPrism, the first black-box performance diagnosis system for LLM training platforms. By progressively recognizing LLM training jobs, identifying their parallelism strategies, and reconstructing the training timelines, LLMPrism achieves non-intrusive, lightweight, and continuous monitoring of LLM training systems. Leveraging this monitoring capability, it further effectively diagnoses potential performance issues. Since Oct. 2024, LLMPrism has been deployed on our large-scale production Platform-X, where the evaluations and deployment experiences demonstrate that LLMPrism achieves accurate timeline reconstruction with an error within 0.3% and effectively diagnoses various performance issues.
Problem

Research questions and friction points this paper is trying to address.

Diagnosing performance issues in multi-tenant LLM training platforms
Reconstructing training timelines using network flow data
Providing non-intrusive monitoring for LLM training systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses network flow data for timeline reconstruction
Black-box diagnosis via parallelism strategy recognition
Non-intrusive lightweight continuous monitoring system
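To illustrate the core idea behind timeline reconstruction from flow data, here is a minimal, hypothetical sketch (not LLMPrism's actual algorithm): LLM training traffic is highly periodic, since every iteration triggers a burst of collective communication, so the iteration period can be estimated from a per-interval byte-count series via autocorrelation. The function name and synthetic trace below are illustrative assumptions.

```python
# Hypothetical sketch: estimate the training-iteration period from a
# per-interval byte-count series, exploiting the periodicity of
# collective-communication bursts. Not the paper's actual method.

def estimate_iteration_period(byte_counts, min_lag=2):
    """Return the lag (in sampling intervals) with the highest autocorrelation."""
    n = len(byte_counts)
    mean = sum(byte_counts) / n
    centered = [b - mean for b in byte_counts]
    var = sum(c * c for c in centered)
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, n // 2):
        # Normalized dot product of the series with a shifted copy of itself.
        score = sum(centered[i] * centered[i + lag] for i in range(n - lag)) / var
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic flow trace (illustrative): a communication burst every 5 intervals.
trace = [100 if i % 5 == 0 else 3 for i in range(100)]
print(estimate_iteration_period(trace))  # prints 5
```

In a real deployment the input would come from sampled flow records (e.g. per-link byte counters), and detected period boundaries would anchor the reconstructed per-iteration timeline.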
Zhihan Jiang
The Chinese University of Hong Kong, Hong Kong SAR, China
Rui Ren
Computing and Networking Innovation Lab, Huawei Cloud Computing Technology Co., Ltd, China
Guangba Yu
Postdoc, The Chinese University of Hong Kong
Cloud Computing, LLMOps, AIOps, Distributed Systems, Chaos Engineering
Yulun Wu
The Chinese University of Hong Kong, Hong Kong SAR, China
Wenwei Gu
Assistant Professor, Nankai University
Software Engineering, Reliability Engineering, AIOps, Time Series Analysis
Yichen Li
The Chinese University of Hong Kong, Hong Kong SAR, China
Yujie Huang
NUAA: Nanjing University of Aeronautics and Astronautics
Computer Vision, Multi-Object Pedestrian Tracking, Trajectory Prediction
Cong Feng
Computing and Networking Innovation Lab, Huawei Cloud Computing Technology Co., Ltd, China
Zengyin Yang
Computing and Networking Innovation Lab, Huawei Cloud Computing Technology Co., Ltd, China
Yongqiang Yang
Huawei Cloud
Cloud Networking, Distributed Systems
Michael R. Lyu
Professor of Computer Science & Engineering, The Chinese University of Hong Kong
software engineering, software reliability, fault tolerance, machine learning, distributed systems