🤖 AI Summary
To address inefficient collaboration and lack of fault tolerance in decentralized large language model (LLM) training—caused by dynamic node churn and unstable network conditions—this paper proposes the first crash-tolerant decentralized training framework. Our method introduces: (1) a communication optimization mechanism based on dynamic traffic routing to improve micro-batch transmission efficiency; (2) an asynchronous micro-batch scheduling protocol integrated with lightweight state snapshots, enabling seamless node join/leave operations; and (3) a heterogeneous-device co-training design compatible with both GPT-like and LLaMA-like architectures. We evaluate the framework on 10 geographically distributed, hardware-heterogeneous, and network-unstable real-world nodes. Results show up to 45% reduction in training time compared to state-of-the-art approaches, alongside significantly improved robustness and scalability.
📝 Abstract
Motivated by the emergence of large language models (LLMs) and the importance of democratizing their training, we propose GWTF, the first crash-tolerant, practical decentralized training framework for LLMs. Unlike existing distributed and federated training frameworks, GWTF enables the efficient collaborative training of an LLM on heterogeneous clients that volunteer their resources. In addition, GWTF addresses node churn, i.e., clients joining or leaving the system at any time, and network instabilities, i.e., network links becoming unstable or unreliable. The core of GWTF is a novel decentralized flow algorithm that finds the routing that maximizes the number of micro-batches trained with the lowest possible delay. We extensively evaluate GWTF on GPT-like and LLaMA-like models and compare it against the prior art. Our results indicate that GWTF reduces the training time by up to 45% in realistic and challenging scenarios that involve heterogeneous client nodes distributed over 10 different geographic locations with a high node churn rate.
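To make the routing objective concrete, here is a minimal toy sketch of delay-aware micro-batch routing. This is purely illustrative and is not GWTF's actual decentralized flow algorithm (which the abstract does not detail); the `Path` structure, its `capacity` and `delay_ms` fields, and the greedy fill strategy are all assumptions for exposition.

```python
# Illustrative sketch only: a toy router that assigns micro-batches to
# candidate pipeline paths, filling lower-delay paths first so that
# throughput is maximized with the lowest possible delay.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    capacity: int    # micro-batches this path can carry per round (assumed)
    delay_ms: float  # end-to-end latency of the path (assumed)

def route_microbatches(paths, n_microbatches):
    """Greedily fill paths in ascending order of delay until all
    micro-batches are assigned or total capacity runs out."""
    assignment = {}
    remaining = n_microbatches
    for p in sorted(paths, key=lambda p: p.delay_ms):
        take = min(p.capacity, remaining)
        if take > 0:
            assignment[p.name] = take
            remaining -= take
        if remaining == 0:
            break
    return assignment, remaining

# Example: 8 micro-batches across three hypothetical paths.
paths = [Path("A", 4, 120.0), Path("B", 3, 80.0), Path("C", 2, 200.0)]
plan, unrouted = route_microbatches(paths, 8)
# → plan == {"B": 3, "A": 4, "C": 1}, unrouted == 0
```

A real decentralized variant would additionally have to recompute routes as nodes churn and link qualities change, which is the regime GWTF targets.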