ParaBlock: Communication-Computation Parallel Block Coordinate Federated Learning for Large Language Models

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low training efficiency of federated learning for large language models (LLMs), caused by massive model parameter counts and high communication overhead, this paper proposes ParaBlock. The method introduces a dual-threaded parallelism mechanism, one thread for computation and one for communication, within a federated block coordinate descent framework. It further incorporates asynchronous gradient transmission and local block-wise parameter updates, allowing computation and communication over parameter blocks to overlap within a single training round. Theoretically, ParaBlock matches the convergence rate of standard federated block coordinate descent; empirically, it preserves model performance on instruction-following and mathematical reasoning tasks while significantly reducing communication latency and bandwidth consumption. This makes it particularly suitable for collaborative training of large-scale LLMs across resource-constrained clients.
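The overlap mechanism the summary describes can be illustrated with a minimal sketch. This is not the paper's implementation: `compute_update` and `upload` are hypothetical stand-ins for the client's local training step on one parameter block and the client-to-server transfer; the point is only that a dedicated communication thread lets the upload of block *t* run concurrently with the computation of block *t+1*.

```python
import threading
import queue

def train_round(blocks, compute_update, upload):
    """Overlap block computation with block upload, in the spirit of
    ParaBlock's communication-computation parallelism (illustrative sketch;
    compute_update/upload are hypothetical callables, not the paper's API)."""
    send_q = queue.Queue()

    def comm_worker():
        # Communication thread: uploads finished block updates while the
        # main thread keeps computing the next block.
        while True:
            item = send_q.get()
            if item is None:      # sentinel: round is over
                break
            upload(item)

    comm = threading.Thread(target=comm_worker)
    comm.start()
    for block in blocks:
        update = compute_update(block)  # local training on one parameter block
        send_q.put(update)              # hand off; upload overlaps next compute
    send_q.put(None)                    # signal the communication thread to stop
    comm.join()
```

In a real federated round the hand-off would carry gradients or block parameters and the transfer would be a network call, but the queue-plus-worker structure is the same: communication latency is hidden behind the next block's computation instead of serializing the two phases.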

📝 Abstract
Federated learning (FL) has been extensively studied as a privacy-preserving training paradigm. Recently, the federated block coordinate descent scheme has become a popular option for training large-scale models, as it allows clients to train only a subset of the model locally instead of the entire model. However, in the era of large language models (LLMs), even a single block can contain a significant number of parameters, posing substantial communication latency, particularly for resource-constrained clients. To address this challenge in federated training/fine-tuning of LLMs, we propose ParaBlock, a novel approach that establishes two parallel threads for communication and computation to enhance communication efficiency. We theoretically prove that the proposed ParaBlock achieves the same convergence rate as standard federated block coordinate descent methods. Empirical evaluations on fine-tuning LLMs for general instruction following and mathematical reasoning confirm that ParaBlock not only maintains strong performance but also significantly improves communication efficiency.
Problem

Research questions and friction points this paper is trying to address.

High communication latency in federated learning for LLMs, where even a single parameter block is large
Serialized communication and computation phases that leave clients idle during block transfers
Preserving model performance on resource-constrained clients during federated fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel communication-computation threads enhance efficiency
Federated block coordinate descent for large models
Maintains convergence rate while reducing communication latency