CG-FedLLM: How to Compress Gradients in Federated Fine-tuning for Large Language Models

📅 2024-05-22
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
To address the prohibitively high gradient-communication overhead of federated fine-tuning for large language models (LLMs), this paper proposes CG-FedLLM, an end-to-end gradient-compression framework. Methodologically, it introduces Temporal-ensemble Gradient-Aware Pre-training (TGAP), which identifies characteristic gradient features of the target model, and Federated AutoEncoder-Involved Fine-tuning (FAF), which compresses gradients adaptively via a federated autoencoder comprising client-side encoders and a server-side decoder. This is the first work to incorporate a federated autoencoder into LLM federated fine-tuning, jointly addressing privacy preservation, communication efficiency, and model performance. Experiments on the C-Eval benchmark show that CG-FedLLM achieves an average 3-point improvement over both centralized and conventional federated fine-tuning baselines, substantially reduces communication volume, and maintains robustness and a high gradient signal-to-noise ratio (SNR).

๐Ÿ“ Abstract
The success of current Large-Language Models (LLMs) hinges on extensive training data that is collected and stored centrally, called Centralized Learning (CL). However, such a collection manner poses a privacy threat, and one potential solution is Federated Learning (FL), which transfers gradients, not raw data, among clients. Unlike traditional networks, FL for LLMs incurs significant communication costs due to their tremendous parameters. This study introduces an innovative approach to compress gradients to improve communication efficiency during LLM FL, formulating the new FL pipeline named CG-FedLLM. This approach integrates an encoder on the client side to acquire the compressed gradient features and a decoder on the server side to reconstruct the gradients. We also developed a novel training strategy that comprises Temporal-ensemble Gradient-Aware Pre-training (TGAP) to identify characteristic gradients of the target model and Federated AutoEncoder-Involved Fine-tuning (FAF) to compress gradients adaptively. Extensive experiments confirm that our approach reduces communication costs and improves performance (e.g., an average 3-point increment compared with traditional CL- and FL-based fine-tuning with LLaMA on a well-recognized benchmark, C-Eval). This improvement is because our encoder-decoder, trained via TGAP and FAF, can filter gradients while selectively preserving critical features. Furthermore, we present a series of experimental analyses focusing on the signal-to-noise ratio, compression rate, and robustness within this privacy-centric framework, providing insight into developing more efficient and secure LLMs.
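The abstract evaluates reconstructed gradients by their signal-to-noise ratio. A minimal sketch of one common SNR definition, signal power over reconstruction-error power in decibels, is below; this exact formula and the `gradient_snr_db` helper are illustrative assumptions, not necessarily the paper's metric.

```python
import numpy as np

def gradient_snr_db(true_grad: np.ndarray, recon_grad: np.ndarray) -> float:
    """SNR in dB of a reconstructed gradient: power of the true gradient
    divided by the power of the reconstruction error. This definition is
    an assumption; the paper's exact metric may differ."""
    noise = recon_grad - true_grad
    return 10.0 * np.log10(np.sum(true_grad ** 2) / np.sum(noise ** 2))

g = np.ones(100)
# A small uniform reconstruction error yields a high SNR.
print(round(gradient_snr_db(g, g + 0.01), 1))  # → 40.0
```

Higher SNR after the encoder-decoder round trip indicates that compression preserved the gradient's salient structure.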
Problem

Research questions and friction points this paper is trying to address.

Compress gradients to reduce communication costs in federated learning
Develop encoder-decoder architecture for efficient gradient transmission
Maintain model performance while preserving privacy in LLM fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Client-side encoder compresses gradient features
Server-side decoder reconstructs compressed gradients
TGAP and FAF training strategy filters gradients adaptively
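The pipeline above (clients upload compressed gradient codes, the server decodes and aggregates them) can be sketched as follows. The random linear maps stand in for a TGAP-pretrained autoencoder, and all dimensions, the 8x compression ratio, and the FedAvg-style mean are illustrative assumptions rather than the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64-dim gradient compressed to an 8-dim code (8x smaller).
GRAD_DIM, CODE_DIM = 64, 8

# Stand-ins for the pretrained encoder/decoder: a random linear map and its
# pseudo-inverse, so the decoder approximately inverts the encoder.
W_enc = rng.normal(size=(CODE_DIM, GRAD_DIM)) / np.sqrt(GRAD_DIM)
W_dec = np.linalg.pinv(W_enc)

def client_encode(grad: np.ndarray) -> np.ndarray:
    """Client side: compress the local gradient before uploading it."""
    return W_enc @ grad

def server_decode(codes: list) -> np.ndarray:
    """Server side: reconstruct each client's gradient, then average
    the reconstructions (FedAvg-style aggregation)."""
    return np.mean([W_dec @ c for c in codes], axis=0)

grads = [rng.normal(size=GRAD_DIM) for _ in range(4)]  # 4 simulated clients
codes = [client_encode(g) for g in grads]              # only codes leave clients
agg = server_decode(codes)
print(agg.shape)  # → (64,)
```

Only the low-dimensional codes cross the network, which is the source of the communication savings; in CG-FedLLM the linear maps would be replaced by the learned autoencoder trained via TGAP and refined during FAF.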
Authors

Huiwen Wu (Zhejiang Laboratory, Hangzhou, Zhejiang, China)
Xiaohan Li (Walmart Inc.)
Deyi Zhang (Zhejiang Laboratory, Hangzhou, Zhejiang, China)
Xiaogang Xu (CUHK)
Jiafei Wu (Zhejiang Laboratory, Hangzhou, Zhejiang, China)
Puning Zhao
Zhe Liu (Zhejiang Laboratory, Hangzhou, Zhejiang, China)