Optimizing Federated Learning in the Era of LLMs: Message Quantization and Streaming

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the substantial communication overhead and constrained local memory in federated learning (FL) of large language models (LLMs), this paper proposes a co-optimization mechanism integrating low-bit quantization of gradient/model updates with containerized streaming transmission for large parameter objects. Implemented atop NVIDIA FLARE, the method enables incremental parameter updates and memory-aware flow-controlled scheduling, ensuring backward compatibility without modifying existing training pipelines. Experimental evaluation demonstrates that the approach reduces communication volume by up to 62% and peak memory consumption by 48%, significantly enhancing training efficiency and system stability. The proposed architecture provides a scalable, lightweight communication solution tailored for practical federated training of billion-parameter LLMs.

📝 Abstract
Federated Learning (FL) offers a promising solution for training machine learning models across distributed data sources while preserving data privacy. However, FL faces critical challenges related to communication overhead and local resource constraints, especially in the era of Large Language Models (LLMs) with billions of parameters. The sheer size of these models exacerbates both memory and communication constraints, making efficient transmission and processing essential for practical deployment. NVIDIA FLARE, an open-source SDK for federated learning, addresses these challenges by introducing advanced communication capabilities. Building upon existing solutions for large object streaming, we enhance FL workflows for LLMs through two key techniques: message quantization and container/file streaming. Quantization reduces message size, while streaming enables efficient memory management, improving scalability and integration with existing workflows. These advancements significantly enhance the robustness and efficiency of FL with LLMs, ensuring better performance in real-world federated learning scenarios.
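The abstract describes message quantization as the size-reduction technique: model updates are cast to a low-bit representation before transmission and restored on the receiving side. As a conceptual illustration (not FLARE's actual API, and the paper does not specify its exact quantization scheme), a minimal symmetric per-tensor int8 round trip might look like this; the function names and the choice of symmetric scaling are assumptions for the sketch:

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization of a float update vector.

    Returns the quantized integers plus the scale needed to dequantize.
    An int8 payload is 4x smaller than float32 on the wire.
    """
    # Largest magnitude maps to +/-127; guard against an all-zero update.
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale


def dequantize_int8(q, scale):
    """Recover approximate float values on the receiving side."""
    return [v * scale for v in q]


# Round trip: the restored update matches the original up to quantization error.
update = [0.5, -1.27, 0.01, 0.0]
q, scale = quantize_int8(update)
restored = dequantize_int8(q, scale)
```

Real systems typically quantize per tensor (or per block) and send the scales as metadata alongside the integer payload, which is what keeps the scheme backward compatible with an unmodified training pipeline.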
Problem

Research questions and friction points this paper is trying to address.

Optimizing communication overhead in federated learning for large language models
Addressing memory constraints during LLM training across distributed data sources
Enhancing scalability and efficiency of federated learning workflows with quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Message quantization reduces communication overhead
Container streaming enables efficient memory management
Enhances federated learning scalability for LLMs
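The streaming idea behind the second bullet is that a billion-parameter object is never materialized in memory as one message: the sender yields bounded chunks and the receiver reassembles them, so peak memory is set by the chunk size rather than the model size. A minimal sketch of that pattern follows; NVIDIA FLARE's actual container/file streaming API differs, and the chunk size shown is an arbitrary assumption:

```python
def stream_chunks(payload: bytes, chunk_size: int = 1 << 20):
    """Yield fixed-size chunks of a serialized parameter object.

    Only one chunk is resident at a time on the sending side, so peak
    memory is bounded by chunk_size instead of len(payload).
    """
    for offset in range(0, len(payload), chunk_size):
        yield payload[offset:offset + chunk_size]


def receive_stream(chunks):
    """Reassemble chunks on the receiving side (e.g. into a file or buffer)."""
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)  # in practice: write to disk or deserialize incrementally
    return bytes(buf)


# A serialized update streamed in 512-byte chunks and reassembled intact.
payload = bytes(range(256)) * 10
reassembled = receive_stream(stream_chunks(payload, chunk_size=512))
```

Flow control, as in the memory-aware scheduling the summary mentions, amounts to pausing the generator until the receiver acknowledges and drains its buffer.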