FedSpaLLM: Federated Pruning of Large Language Models

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Structured pruning of large language models (LLMs) in privacy-sensitive federated settings must simultaneously address data privacy, system heterogeneity, and communication efficiency. Method: This work proposes the first federated learning framework for structured LLM pruning, in which clients prune their models locally on private data without uploading that data. It introduces an ℓ₀-norm aggregation function that averages only non-zero weights across clients, combined with adaptive mask expansion and layer sampling to enforce a controllable global sparsity target while enabling resource-aware pruning. Contribution/Results: Experiments across diverse federated configurations show improved pruning accuracy and convergence stability, reduced communication overhead, and downstream task performance approaching centralized pruning baselines, while preserving data privacy and accommodating heterogeneous client resources.

📝 Abstract
Large Language Models (LLMs) achieve state-of-the-art performance but are challenging to deploy due to their high computational and storage demands. Pruning can reduce model size, yet existing methods assume public access to calibration data, which is impractical for privacy-sensitive applications. To address the challenge of pruning LLMs in privacy-preserving settings, we propose FedSpaLLM, the first federated learning framework designed specifically for pruning LLMs. FedSpaLLM enables clients to prune their models locally based on private data while accounting for system heterogeneity and maintaining communication efficiency. Our framework introduces several key innovations: (1) a novel $\ell_0$-norm aggregation function that ensures only non-zero weights are averaged across clients, preserving important model parameters; (2) an adaptive mask expansion technique that meets global sparsity targets while accommodating client-specific pruning decisions; and (3) a layer sampling strategy that reduces communication overhead and personalizes the pruning process based on client resources. Extensive experiments show that FedSpaLLM improves pruning performance in diverse federated settings.
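The $\ell_0$-norm aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each client sends a pruned (sparsified) copy of a weight tensor, and the server averages each position only over the clients that kept it non-zero, so that one client's pruning decision does not drag a weight toward zero for everyone.

```python
import numpy as np

def l0_aggregate(client_weights):
    """Sketch of l0-norm aggregation: element-wise average over
    non-zero entries only. client_weights is a list of same-shaped
    arrays, one pruned weight tensor per client."""
    stacked = np.stack(client_weights)               # (num_clients, ...)
    nonzero_counts = np.count_nonzero(stacked, axis=0)
    summed = stacked.sum(axis=0)
    # Positions pruned by every client stay exactly zero;
    # np.maximum guards the division at those positions.
    return np.where(nonzero_counts > 0,
                    summed / np.maximum(nonzero_counts, 1),
                    0.0)
```

For example, with client tensors `[1.0, 0.0]` and `[3.0, 4.0]`, plain FedAvg would yield `[2.0, 2.0]`, diluting the second weight; the ℓ₀-style rule yields `[2.0, 4.0]`, since only one client retained the second position.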
Problem

Research questions and friction points this paper is trying to address.

Federated pruning of Large Language Models
Privacy-preserving model size reduction
Efficient communication in heterogeneous systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated pruning for LLMs
Adaptive mask expansion technique
Layer sampling reduces communication
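The layer sampling idea in the list above can be sketched as below. This is a hypothetical illustration, assuming the simplest variant: each client prunes and communicates only a random subset of layers sized to its resource budget, and the server merges per-layer results from whichever clients sampled each layer. The function name and budget parameter are placeholders, not names from the paper.

```python
import random

def sample_layers(num_layers, client_budget, rng=None):
    """Pick which layers this client prunes and uploads this round.
    client_budget caps the subset size, modeling a resource-constrained
    client; returns sorted layer indices for deterministic merging."""
    rng = rng or random.Random()
    k = min(client_budget, num_layers)
    return sorted(rng.sample(range(num_layers), k))
```

Communication then scales with `client_budget` rather than with the full layer count, which is how sampling reduces overhead while letting weaker clients participate.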