🤖 AI Summary
To address training interruptions caused by frequent hardware failures in large-scale distributed model training, this paper proposes a lightweight, time-series-aware fault detection method. Targeting thousand-node-scale training environments, it introduces an end-to-end detection framework that jointly leverages multi-dimensional monitoring time-series analysis, anomaly pattern clustering, and dynamic thresholding, enabling fine-grained, low-latency fault alerting for distributed training tasks. Compared with manual inspection and generic anomaly-detection approaches, the method reacts within seconds (mean latency: 3.6 seconds), achieves high-precision fault localization (precision: 0.904; F1-score: 0.893), and remains robust under heterogeneous failure modes. Deployed in production for over one year, it has significantly improved training continuity and operational efficiency.
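The dynamic-thresholding idea mentioned above can be illustrated with a minimal sketch. Note that the function and parameter names below are illustrative assumptions, not taken from the paper: the idea is simply to flag a metric sample as anomalous when it deviates from a rolling baseline by more than `k` standard deviations, so the threshold adapts as the workload's normal behavior drifts.

```python
from collections import deque

def make_dynamic_threshold_detector(window=60, k=4.0):
    """Return a detector that flags samples deviating from a rolling
    baseline by more than k standard deviations. This is a generic
    sketch of dynamic thresholding, not the paper's actual algorithm."""
    history = deque(maxlen=window)

    def is_anomalous(x):
        if len(history) < window:
            history.append(x)
            return False  # warm-up: build the baseline first
        mean = sum(history) / len(history)
        var = sum((v - mean) ** 2 for v in history) / len(history)
        std = var ** 0.5
        anomalous = abs(x - mean) > k * max(std, 1e-9)
        if not anomalous:
            history.append(x)  # update baseline only with normal samples
        return anomalous

    return is_anomalous
```

In a setting like the one described, one such detector could run per machine and per metric (e.g., GPU utilization, network throughput), with persistent flags feeding the clustering stage.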
📝 Abstract
Large-scale distributed model training requires simultaneous training on up to thousands of machines. Detecting the faulty machine is critical when an unexpected fault occurs. In our experience, a training task can encounter two faults per day on average, each possibly halting the task for hours. To address the drawbacks of time-consuming and labor-intensive manual scrutiny, we propose Minder, an automatic faulty machine detector for distributed training tasks. The key idea of Minder is to automatically and efficiently detect faulty machines by their distinctive monitoring metric patterns, which can persist for a period before the entire training task comes to a halt. Minder has been deployed in our production environment for over one year, monitoring daily distributed training tasks, each involving up to thousands of machines. In our real-world fault detection scenarios, Minder accurately and efficiently reacts to faults within 3.6 seconds on average, with a precision of 0.904 and an F1-score of 0.893.
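The notion of a machine with a "distinctive monitoring metric pattern" can be sketched as an outlier search: since healthy machines in the same task tend to show similar metric traces, the faulty one is the machine whose metric vector is least similar to its peers. The code below is a minimal, hypothetical illustration of that intuition (mean pairwise Euclidean distance), not the paper's exact similarity measure.

```python
import math

def most_dissimilar_machine(metrics):
    """Given {machine_id: [metric values]}, return the machine whose
    metric vector is farthest, on average, from all peers. Illustrative
    sketch of outlier-based faulty-machine localization; not Minder's
    exact method."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    scores = {}
    for mid, vec in metrics.items():
        others = [v for m, v in metrics.items() if m != mid]
        scores[mid] = sum(dist(vec, o) for o in others) / len(others)
    return max(scores, key=scores.get)
```

Under this framing, localization reduces to ranking machines by dissimilarity and alerting on the top candidate once its score stays elevated for long enough.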