🤖 AI Summary
Frequent, hard-to-diagnose failures during large language model (LLM) training on thousand-GPU clusters undermine training stability and operational efficiency.
Method: This paper proposes the first full-stack (hardware and software) real-time diagnosis framework for large-scale LLM training, integrating fine-grained intra-kernel tracing with multi-dimensional temporal metric aggregation to overcome the limitations of conventional point-solution tools. A lightweight tracing daemon and a distributed real-time monitoring architecture enable low-overhead, high-precision anomaly detection.
Contribution/Results: Deployed continuously for eight months on a 6,000-GPU cluster, the framework reduces the mean time to diagnose failures by 72%, significantly improving large-scale LLM training stability and system observability. Its design enables scalable, production-grade fault localization across heterogeneous GPU infrastructure without compromising training performance.
📝 Abstract
The rapid proliferation of large language models has driven the need for efficient GPU training clusters. However, ensuring high-performance training in these clusters is challenging due to the complexity of software-hardware interactions and the frequent occurrence of training anomalies. Since existing diagnostic tools are narrowly tailored to specific issues, there are gaps in their ability to address anomalies spanning the entire training stack. In response, we introduce XPUTimer, a real-time diagnostic framework designed for distributed LLM training at scale. XPUTimer first integrates a lightweight tracing daemon to monitor key code segments with minimal overhead. Additionally, it features a diagnostic engine that employs novel intra-kernel tracing and holistic aggregated metrics to efficiently identify and resolve anomalies. Deployment of XPUTimer across 6,000 GPUs over eight months demonstrated significant improvements across the training stack, validating its effectiveness in real-world scenarios.
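The abstract describes two core ideas: a lightweight daemon that traces key code segments with minimal overhead, and aggregated metrics used to flag anomalies. XPUTimer's actual implementation is not shown here, but the general pattern can be sketched in a few lines of Python. Everything below (the `Tracer` class, its `span` and `anomalies` methods, and the z-score threshold) is a hypothetical illustration of the idea, not the paper's API:

```python
import time
from collections import defaultdict
from statistics import mean, stdev


class Tracer:
    """Minimal span tracer: records wall-clock durations per labeled code
    segment and flags segments whose latest sample is a statistical outlier.
    Illustrative sketch only -- not XPUTimer's real interface."""

    def __init__(self, z_threshold=3.0, min_samples=5):
        self.durations = defaultdict(list)  # label -> list of seconds
        self.z_threshold = z_threshold      # how many sigmas count as anomalous
        self.min_samples = min_samples      # need a baseline before flagging

    def span(self, label):
        """Context manager that times one labeled code segment."""
        tracer = self

        class _Span:
            def __enter__(self):
                self.t0 = time.perf_counter()
                return self

            def __exit__(self, *exc):
                tracer.durations[label].append(time.perf_counter() - self.t0)
                return False

        return _Span()

    def anomalies(self):
        """Labels whose most recent duration deviates more than
        z_threshold standard deviations from the earlier samples."""
        flagged = []
        for label, ds in self.durations.items():
            if len(ds) < self.min_samples:
                continue
            hist, latest = ds[:-1], ds[-1]
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and abs(latest - mu) / sigma > self.z_threshold:
                flagged.append(label)
        return flagged


# Usage with synthetic durations: five ~1 s steps, then one 5 s stall.
tracer = Tracer()
tracer.durations["train_step"] = [1.00, 1.01, 0.99, 1.00, 1.02, 5.00]
print(tracer.anomalies())  # -> ['train_step']
```

A real system would of course attach such spans inside GPU kernels and communication collectives and stream the samples to a distributed aggregator; the z-score check here stands in for the paper's richer multi-dimensional temporal metrics.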