Adaptive Hybrid Model Pruning in Federated Learning through Loss Exploration

📅 2024-05-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high communication overhead, client-side computational constraints, and slow convergence with degraded accuracy that federated learning exhibits under complex models and non-IID data, this paper proposes AutoFLIP. Methodologically, AutoFLIP introduces a novel pruning mechanism driven by a federated loss exploration phase: by modeling client-wise loss topology and gradient dynamics, it adaptively co-optimizes structured and unstructured pruning to automatically identify and compress low-contribution substructures. Crucially, the lightweight pruning decisions are made locally during loss exploration, requiring no additional communication or extra global model downloads. Experimental results show that AutoFLIP reduces average client computation cost by 48.8% and communication cost by 35.5%, accelerates global convergence, and improves both final model accuracy and robustness to distribution shift.
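The loss-exploration idea described above can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the authors' actual algorithm: the function names (`explore_importance`, `prune_mask`), the probe-gradient input format, and the mean-of-absolute-gradients score are all hypothetical stand-ins for the paper's client-wise loss-topology analysis.

```python
import numpy as np

def explore_importance(client_grads):
    """Hypothetical loss-exploration scoring (not the paper's exact method).

    client_grads: list of (num_probes, num_params) arrays, one per client,
    holding gradients sampled at several probe points of each client's
    local loss landscape. A parameter whose gradient stays small across
    every client's loss topology contributes little and is a pruning
    candidate, so importance = mean |gradient| across probes and clients.
    """
    per_client = [np.abs(g).mean(axis=0) for g in client_grads]
    return np.mean(per_client, axis=0)

def prune_mask(importance, keep_ratio):
    """Keep the top `keep_ratio` fraction of parameters by importance."""
    k = max(1, int(len(importance) * keep_ratio))
    threshold = np.sort(importance)[-k]
    return importance >= threshold
```

Because the scores depend only on gradients each client already computes locally, a decision like this adds no communication rounds, which matches the summary's claim about local pruning decisions.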

📝 Abstract
The rapid proliferation of smart devices coupled with the advent of 6G networks has profoundly reshaped the domain of collaborative machine learning. Alongside growing privacy and security concerns in sensitive fields, these developments have positioned federated learning (FL) as a pivotal technology for decentralized model training. Despite its vast potential, especially in the age of complex foundation models, FL encounters challenges such as elevated communication costs, computational constraints, and the complexities of non-IID data distributions. We introduce AutoFLIP, an innovative approach that uses a federated loss exploration phase to drive adaptive hybrid pruning, operating in both a structured and an unstructured way. This mechanism automatically identifies and prunes model substructures by distilling knowledge of model gradient behavior across the loss topologies of different non-IID clients, thereby optimizing computational efficiency and enhancing model performance in resource-constrained scenarios. Extensive experiments on various datasets and FL tasks reveal that AutoFLIP not only efficiently accelerates global convergence, but also achieves superior accuracy and robustness compared to traditional methods. On average, AutoFLIP reduces computational overhead by 48.8% and communication costs by 35.5%, while improving global accuracy. By significantly reducing these overheads, AutoFLIP paves the way for efficient FL deployment in real-world applications with scalable and broad applicability.
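As a rough illustration of what "hybrid" structured-plus-unstructured pruning can look like, the sketch below zeroes individual low-importance weights (unstructured) and then zeroes entire low-importance rows, i.e. neurons (structured). All names and the row-wise notion of structure are our own assumptions for illustration; the paper's substructure selection is driven by its loss-exploration scores, not shown here.

```python
import numpy as np

def hybrid_prune(weight, importance, unstruct_keep=0.5, struct_keep=0.8):
    """Illustrative hybrid pruning (assumed scheme, not the paper's).

    weight, importance: (out_features, in_features) arrays.
    1) Unstructured: zero weights outside the top `unstruct_keep`
       fraction of per-weight importance scores.
    2) Structured: zero whole output rows (neurons) whose summed
       importance falls outside the top `struct_keep` fraction.
    """
    # Unstructured step: threshold individual weights by importance.
    flat = importance.ravel()
    k = max(1, int(flat.size * unstruct_keep))
    thr = np.sort(flat)[-k]
    pruned = np.where(importance >= thr, weight, 0.0)

    # Structured step: drop entire rows with the lowest total importance.
    row_score = importance.sum(axis=1)
    rk = max(1, int(len(row_score) * struct_keep))
    row_thr = np.sort(row_score)[-rk]
    pruned[row_score < row_thr, :] = 0.0
    return pruned
```

Combining both granularities is what lets a method of this kind cut per-client compute (structured removal shrinks actual layer work) while still fine-tuning sparsity at the weight level (unstructured masking).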
Problem

Research questions and friction points this paper is trying to address.

Reduces communication costs in federated learning
Optimizes model pruning for heterogeneous data distributions
Improves computational efficiency in resource-limited devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive hybrid pruning for federated learning optimization
Federated loss exploration to identify pruning substructures
Reduces computational overhead by 48.8% and communication costs by 35.5% on average
Christian Internò
PhD Student, Bielefeld University, Honda Research Institute Europe
Representation Learning, Machine Learning, Distributed Learning, AI Safety
E. Raponi
LIACS, Leiden University, Leiden, Netherlands
N. V. Stein
LIACS, Leiden University, Leiden, Netherlands
T. Bäck
LIACS, Leiden University, Leiden, Netherlands
M. Olhofer
Honda Research Institute EU, Offenbach, Germany
Yaochu Jin
Westlake University, Hangzhou, Zhejiang, China
Barbara Hammer
Professor, Bielefeld University
machine learning, data mining, neural networks, bioinformatics, theoretical computer science