🤖 AI Summary
To address the substantial communication overhead and constrained local memory in federated learning (FL) of large language models (LLMs), this paper proposes a co-optimization mechanism integrating low-bit quantization of gradient/model updates with containerized streaming transmission for large parameter objects. Implemented atop NVIDIA FLARE, the method enables incremental parameter updates and memory-aware flow-controlled scheduling, ensuring backward compatibility without modifying existing training pipelines. Experimental evaluation demonstrates that the approach reduces communication volume by up to 62% and peak memory consumption by 48%, significantly enhancing training efficiency and system stability. The proposed architecture provides a scalable, lightweight communication solution tailored for practical federated training of billion-parameter LLMs.
📝 Abstract
Federated Learning (FL) offers a promising solution for training machine learning models across distributed data sources while preserving data privacy. However, FL faces critical challenges related to communication overhead and local resource constraints, especially in the era of Large Language Models (LLMs) with billions of parameters. The sheer size of these models exacerbates both memory and communication constraints, making efficient transmission and processing essential for practical deployment. NVIDIA FLARE, an open-source SDK for federated learning, addresses these challenges by introducing advanced communication capabilities. Building upon existing solutions for large object streaming, we enhance FL workflows for LLMs through two key techniques: message quantization and container/file streaming. Quantization reduces message size, while streaming enables efficient memory management, improving scalability and integration with existing workflows. These advancements significantly enhance the robustness and efficiency of FL with LLMs, ensuring better performance in real-world federated learning scenarios.