SeedFlood: A Step Toward Scalable Decentralized Training of LLMs

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of escalating communication overhead and the difficulty of achieving global consistency in decentralized large language model (LLM) training. To this end, the authors propose SeedFlood, a novel method that introduces a seed-reconstructible mechanism into decentralized optimization. By leveraging the reproducibility of random seeds in zeroth-order optimization, SeedFlood compresses communication messages to near-zero size and efficiently disseminates them across the network via a flooding protocol, thereby decoupling communication cost from model parameter count. Evaluated on fine-tuning tasks with hundred-billion-parameter LLMs, SeedFlood substantially outperforms conventional gossip-based baselines in both communication efficiency and generalization performance, achieving results comparable to first-order optimization methods.
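The key property the summary describes, that a zeroth-order update is fully determined by a random seed plus one scalar, can be sketched as follows. This is a minimal, hypothetical illustration in the style of SPSA/MeZO-like two-point estimators, not the paper's actual implementation; all function names and hyperparameters are assumptions.

```python
import random

def gaussian_direction(seed, dim):
    """Regenerate the same random perturbation vector from a seed."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def zo_step(params, loss_fn, seed, eps=1e-3, lr=1e-2):
    """One zeroth-order step. The resulting update is reconstructible
    from just (seed, projected_grad): two numbers, regardless of model size."""
    z = gaussian_direction(seed, len(params))
    loss_plus = loss_fn([p + eps * zi for p, zi in zip(params, z)])
    loss_minus = loss_fn([p - eps * zi for p, zi in zip(params, z)])
    # Two-point finite-difference estimate of the directional derivative.
    projected_grad = (loss_plus - loss_minus) / (2 * eps)
    new_params = [p - lr * projected_grad * zi for p, zi in zip(params, z)]
    return seed, projected_grad, new_params

def apply_remote(params, seed, projected_grad, lr=1e-2):
    """Replay a peer's update locally from its tiny (seed, scalar) message."""
    z = gaussian_direction(seed, len(params))
    return [p - lr * projected_grad * zi for p, zi in zip(params, z)]
```

Because `apply_remote` reproduces the update exactly, the message a client broadcasts never contains parameters or gradients, which is why communication cost stays constant as the model grows.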

📝 Abstract
This work presents a new approach to decentralized training, SeedFlood, designed to scale to large models across complex network topologies and to achieve global consensus with minimal communication overhead. Traditional gossip-based methods suffer from message communication costs that grow with model size, while information decay over network hops renders global consensus inefficient. SeedFlood departs from these practices by exploiting the seed-reconstructible structure of zeroth-order updates, effectively making the messages near-zero in size and allowing them to be flooded to every client in the network. This mechanism makes communication overhead negligible and independent of model size, removing the primary scalability bottleneck in decentralized training. Consequently, SeedFlood enables training in regimes previously considered impractical, such as billion-parameter models distributed across hundreds of clients. Our experiments on decentralized LLM fine-tuning demonstrate that SeedFlood consistently outperforms gossip-based baselines in both generalization performance and communication efficiency, and even achieves results comparable to first-order methods in large-scale settings.
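The flooding dissemination the abstract contrasts with gossip can be sketched generically: because each message is only a few bytes, every node can afford to forward it to all neighbours, so delivery needs no routing state and reaches the whole network in a number of rounds equal to the graph diameter. This is a standard flooding sketch under assumed data structures, not the paper's exact protocol.

```python
def flood(adjacency, origin):
    """Flood a message from `origin` over an undirected graph given as
    an adjacency dict {node: [neighbours]}. Each node forwards a message
    it has not yet seen to all of its neighbours exactly once."""
    delivered = {origin}      # dedup set: guarantees each node forwards once
    frontier = [origin]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbour in adjacency[node]:
                if neighbour not in delivered:
                    delivered.add(neighbour)
                    nxt.append(neighbour)
        frontier = nxt
    return delivered
```

Unlike gossip, where updates are averaged pairwise and attenuate over hops, flooding delivers the identical (seed, scalar) message to every client, which is what makes exact global consensus feasible here.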
Problem

Research questions and friction points this paper is trying to address.

decentralized training
large language models
communication overhead
global consensus
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

SeedFlood
decentralized training
zeroth-order optimization
communication efficiency
large language models
Jihun Kim
POSTECH, Pohang, Republic of Korea
Namhoon Lee
POSTECH
machine learning