Rehearsal-Free Continual Federated Learning with Synergistic Synaptic Intelligence

📅 2024-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of catastrophic forgetting and data heterogeneity in continual federated learning (CFL), this paper proposes FedSSI—a rehearsal-free regularization method based on collaborative synaptic intelligence. FedSSI introduces the first synaptic importance estimation mechanism tailored to non-IID, dynamically evolving task streams in federated settings. It integrates task-incremental parameter importance updates with client-level elastic weight freezing, operating within the FedAvg framework to achieve privacy-preserving continual learning with low computational overhead and strong generalization. Compared to state-of-the-art methods, FedSSI achieves average accuracy gains of 3.2–7.8% across multiple heterogeneous CFL benchmarks and entirely eliminates the communication and memory overhead of historical data caching—thereby overcoming key applicability limitations of conventional Synaptic Intelligence in federated environments.
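The summary above builds on Synaptic Intelligence (SI): each parameter accumulates a path integral of its contribution to loss reduction, which is normalized into an importance weight and used in a quadratic penalty that discourages drift of important parameters on later tasks. The sketch below illustrates that standard SI mechanism on a toy parameter vector; the function names and the toy training loop are illustrative assumptions, not the paper's exact FedSSI update rules.

```python
import numpy as np

# Illustrative sketch of Synaptic Intelligence (SI)-style regularization,
# the mechanism FedSSI adapts to heterogeneous CFL. The toy model and all
# helper names are assumptions for illustration only.

def si_path_integral(omega, grad, delta_theta):
    """Accumulate each parameter's contribution to the loss change along
    the training trajectory: omega += -grad * delta_theta."""
    return omega - grad * delta_theta

def si_importance(omega, total_delta, xi=1e-3):
    """Normalize the accumulated path integral into importance weights;
    xi damps the denominator for parameters that barely moved."""
    return np.maximum(omega, 0.0) / (total_delta ** 2 + xi)

def si_penalty(theta, theta_star, importance, c=0.1):
    """Quadratic surrogate loss penalizing drift of important parameters
    away from their values after the previous task (theta_star)."""
    return c * np.sum(importance * (theta - theta_star) ** 2)

# Toy walk-through on a 3-parameter "model" with fake gradients.
rng = np.random.default_rng(0)
theta_star = rng.normal(size=3)   # parameters after the previous task
theta = theta_star.copy()
omega = np.zeros(3)
for _ in range(5):                # five mock SGD steps on a new task
    grad = rng.normal(size=3)
    step = -0.1 * grad
    omega = si_path_integral(omega, grad, step)
    theta = theta + step
importance = si_importance(omega, theta - theta_star)
penalty = si_penalty(theta, theta_star, importance)
```

In a rehearsal-free method like FedSSI, a term of this shape is added to each client's local objective, so no samples from earlier tasks need to be cached or replayed.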

📝 Abstract
Continual Federated Learning (CFL) allows distributed devices to collaboratively learn novel concepts from continuously shifting training data while avoiding knowledge forgetting of previously seen tasks. To tackle this challenge, most current CFL approaches rely on extensive rehearsal of previous data. Despite effectiveness, rehearsal comes at a cost to memory, and it may also violate data privacy. Considering these, we seek to apply regularization techniques to CFL by considering their cost-efficient properties that do not require sample caching or rehearsal. Specifically, we first apply traditional regularization techniques to CFL and observe that existing regularization techniques, especially synaptic intelligence, can achieve promising results under homogeneous data distribution but fail when the data is heterogeneous. Based on this observation, we propose a simple yet effective regularization algorithm for CFL named FedSSI, which tailors the synaptic intelligence for the CFL with heterogeneous data settings. FedSSI can not only reduce computational overhead without rehearsal but also address the data heterogeneity issue. Extensive experiments show that FedSSI achieves superior performance compared to state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Avoid knowledge forgetting in continual federated learning
Reduce memory cost without data rehearsal
Address data heterogeneity in federated learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regularization techniques without rehearsal
FedSSI algorithm for heterogeneous data
Synaptic intelligence tailored for CFL
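The AI summary states that FedSSI operates within the FedAvg framework. A minimal sketch of FedAvg's server-side step—a sample-count-weighted average of client parameters—is shown below; the regularization specific to FedSSI would run inside each client's local training, not in this aggregation. Function and variable names here are illustrative assumptions.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Standard FedAvg aggregation: weighted average of client parameter
    vectors, weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()          # normalize to sum to 1
    stacked = np.stack(client_params)      # shape: (num_clients, num_params)
    return weights @ stacked               # server model for the next round

# Two toy clients; the second holds 3x more data, so its update dominates.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_model = fedavg_aggregate(clients, client_sizes=[10, 30])
# weights are [0.25, 0.75] → global_model is [2.5, 3.5]
```

Because only model parameters cross the network, a regularization-based method layered on this loop preserves the privacy properties the abstract highlights.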
Yichen Li
School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
Yuying Wang
Soochow University, Suzhou, China
Haozhao Wang
Huazhong University of Science and Technology
Cloud-edge Distributed Learning, Federated Learning, AI Security, Multi-modal LLM Agent
Yining Qi
Huazhong University of Science and Technology
federated learning, data security, provable data possession
Tianzhe Xiao
School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
Ruixuan Li
School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China