🤖 AI Summary
Amid globally fragmented computing resources, large language model (LLM) training faces prohibitively high costs and strong centralization barriers. Method: We propose the first resource-driven taxonomy of decentralized LLM training, distinguishing community-driven and organization-driven paradigms, and rigorously delineate its conceptual boundaries with federated learning and distributed training through a unified classification framework. Our analysis combines a systematic literature review, conceptual comparison, and in-depth case studies to establish a multidimensional evaluation framework. Contribution/Results: This work delivers the first comprehensive survey of decentralized LLM training, identifying core technical challenges (including communication overhead, incentive compatibility, and security alignment), synthesizing representative implementation pathways, and outlining concrete future research directions. It provides both theoretical foundations and practical guidance for lowering LLM training barriers and advancing AI democratization.
📝 Abstract
The emergence of large language models (LLMs) has revolutionized AI development, yet their training demands computational resources beyond a single cluster or even a single datacenter, limiting access to all but the largest organizations. Decentralized training has emerged as a promising paradigm for harnessing dispersed resources across clusters, datacenters, and global regions, democratizing LLM development for broader communities. As the first comprehensive exploration of this emerging field, we present decentralized LLM training as a resource-driven paradigm and categorize it into community-driven and organization-driven approaches. Furthermore, our in-depth analysis clarifies decentralized LLM training along three dimensions: (1) its position relative to related domain concepts, (2) development trends in decentralized resources, and (3) recent advances discussed under a novel taxonomy. We also provide up-to-date case studies and explore future directions, contributing to the evolution of decentralized LLM training research.