PerfTracker: Online Performance Troubleshooting for Large-scale Model Training in Production

📅 2025-06-10
🤖 AI Summary
Large-scale model training (LMT) on GPU clusters with tens of thousands of devices poses hard performance-diagnosis challenges: low observability, high profiling overhead, and slow root-cause localization. Existing approaches struggle to deliver fine-grained, low-overhead, cross-stack attribution. This paper introduces PerfTracker, the first online, fine-grained performance diagnosis system tailored to LMT. It combines a differential observability mechanism with runtime dynamic instrumentation, lightweight online analysis, and multi-level metric correlation across the full software-hardware stack (Python functions, CUDA operations, and GPU interconnects), achieving precise end-to-end bottleneck identification with minimal perturbation. Deployed in production clusters exceeding 10,000 GPUs, PerfTracker reduces average root-cause localization time to minutes while incurring less than 3% runtime overhead, substantially improving observability and operational efficiency for large-scale LMT systems.
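To give a flavor of what "runtime dynamic instrumentation" of Python-level training code can look like, the sketch below times every Python function call via a profiling hook and ranks functions by mean duration. This is only an illustrative analogy under assumed mechanics (the `FunctionTimer` class, `slow_step`, and `fast_step` are hypothetical), not PerfTracker's actual implementation:

```python
import sys
import time
from collections import defaultdict

# Illustrative sketch only: time Python function calls with a runtime
# profiling hook, then summarize per-function durations. PerfTracker's
# real instrumentation is far more sophisticated (CUDA, interconnects).
class FunctionTimer:
    def __init__(self):
        self.durations = defaultdict(list)  # function name -> call durations
        self._starts = {}                   # frame id -> start timestamp

    def _hook(self, frame, event, arg):
        if event == "call":
            self._starts[id(frame)] = time.perf_counter()
        elif event == "return":
            start = self._starts.pop(id(frame), None)
            if start is not None:
                self.durations[frame.f_code.co_name].append(
                    time.perf_counter() - start)

    def __enter__(self):
        sys.setprofile(self._hook)  # enable instrumentation at runtime
        return self

    def __exit__(self, *exc):
        sys.setprofile(None)        # disable with no residual overhead

def slow_step():   # hypothetical bottleneck function
    time.sleep(0.01)

def fast_step():   # hypothetical cheap function
    pass

with FunctionTimer() as timer:
    for _ in range(3):
        slow_step()
        fast_step()

# Rank functions by mean duration to surface the likely bottleneck.
ranked = sorted(timer.durations.items(),
                key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
print(ranked[0][0])  # slow_step dominates
```

Because the hook can be installed and removed at any time, instrumentation can be switched on only while a diagnosis is in progress, which is one way to keep production impact small.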

📝 Abstract
Troubleshooting performance problems of large model training (LMT) is immensely challenging, due to unprecedented scales of modern GPU clusters, the complexity of software-hardware interactions, and the data intensity of the training process. Existing troubleshooting approaches designed for traditional distributed systems or datacenter networks fall short and can hardly apply to real-world training systems. In this paper, we present PerfTracker, the first online troubleshooting system utilizing fine-grained profiling, to diagnose performance issues of large-scale model training in production. PerfTracker can diagnose performance issues rooted in both hardware (e.g., GPUs and their interconnects) and software (e.g., Python functions and GPU operations). It scales to LMT on modern GPU clusters. PerfTracker effectively summarizes runtime behavior patterns of fine-grained LMT functions via online profiling, and leverages differential observability to localize the root cause with minimal production impact. PerfTracker has been deployed as a production service for large-scale GPU clusters of O(10,000) GPUs (product homepage https://help.aliyun.com/zh/pai/user-guide/perftracker-online-performance-analysis-diagnostic-tool). It has been used to diagnose a variety of difficult performance issues.
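The "differential observability" idea in the abstract can be pictured as comparing the same metric across ranks and flagging the outlier. The toy sketch below does this for per-rank step times; the rank names, numbers, and the 1.5x-of-median threshold are all illustrative assumptions, not values from the paper:

```python
import statistics

# Illustrative sketch of differential diagnosis: the same metric is
# collected on every rank, and a rank that deviates from its peers is
# flagged as a suspect. Data and threshold are made up for illustration.
step_time_ms = {
    "rank0": [102, 99, 101, 100],
    "rank1": [101, 103, 100, 98],
    "rank2": [180, 175, 182, 178],  # hypothetical straggler
    "rank3": [100, 102, 99, 101],
}

means = {rank: statistics.mean(v) for rank, v in step_time_ms.items()}
baseline = statistics.median(means.values())  # robust cluster-wide baseline

# Flag ranks whose mean step time deviates far from the cluster median.
suspects = [rank for rank, m in means.items() if m > 1.5 * baseline]
print(suspects)  # ['rank2']
```

Comparing against peer ranks rather than a fixed threshold is what makes this "differential": the healthy majority provides the baseline, so no offline calibration is needed.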
Problem

Research questions and friction points this paper is trying to address.

Diagnose performance issues in large-scale model training
Scale to modern GPU clusters with fine-grained profiling
Localize root causes with minimal production impact
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online fine-grained profiling for LMT
Hardware-software root cause diagnosis
Scalable to O(10,000) GPU clusters
🔎 Similar Papers
2024-06-07 · International Symposium on High-Performance Computer Architecture · Citations: 5
Authors
Yu Guan (Alibaba Cloud)
Zhiyu Yin (Alibaba Cloud)
Haoyu Chen (Alibaba Cloud)
Sheng Cheng (Alibaba Cloud)
Chaojie Yang (Alibaba Cloud)
Tianyin Xu (University of Illinois at Urbana-Champaign)
Yang Zhang (Alibaba Cloud)
Hanyu Zhao (Alibaba Group)
Yong Li (Alibaba Cloud)
Dennis Cai (Alibaba Cloud)
Ennan Zhai (Alibaba Group)