SacFL: Self-Adaptive Federated Continual Learning for Resource-Constrained End Devices

๐Ÿ“… 2025-05-01
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Federated continual learning (FCL) on resource-constrained edge devices faces challenges under dynamic data evolution and distributional driftโ€”including high storage overhead, reliance on manual annotation for task transitions, and inability to autonomously distinguish benign from adversarial tasks. Method: We propose a lightweight FCL framework featuring: (1) a novel encoder-decoder decoupled architecture that drastically reduces model storage footprint; (2) a contrastive-learning-based adaptive drift detection mechanism enabling unsupervised discrimination of task types (benign vs. adversarial) and automatic policy triggering; and (3) task-sensitive lightweight components to enhance incremental generalization. Results: Evaluated on CIFAR-100 and THUCNews across class-incremental and domain-incremental settings, the framework achieves over 40% reduction in storage overhead while maintaining robust performance. A deployable, edge-executable demonstration system has been implemented.

๐Ÿ“ Abstract
The proliferation of end devices has led to a distributed computing paradigm, wherein on-device machine learning models continuously process diverse data generated by these devices. The dynamic nature of this data, characterized by continuous changes or data drift, poses significant challenges for on-device models. To address this issue, continual learning (CL) is proposed, enabling machine learning models to incrementally update their knowledge and mitigate catastrophic forgetting. However, the traditional centralized approach to CL is unsuitable for end devices due to privacy and data volume concerns. In this context, federated continual learning (FCL) emerges as a promising solution, preserving user data locally while enhancing models through collaborative updates. Aiming at the challenges of limited storage resources for CL, poor autonomy in task shift detection, and difficulty in coping with new adversarial tasks in the FCL scenario, we propose a novel FCL framework named SacFL. SacFL employs an Encoder-Decoder architecture to separate task-robust and task-sensitive components, significantly reducing storage demands by retaining lightweight task-sensitive components for resource-constrained end devices. Moreover, SacFL leverages contrastive learning to introduce an autonomous data shift detection mechanism, enabling it to discern whether a new task has emerged and whether it is a benign task. This capability ultimately allows the device to autonomously trigger CL or attack defense strategies without additional information, which is more practical for end devices. Comprehensive experiments conducted on multiple text and image datasets, such as CIFAR-100 and THUCNews, have validated the effectiveness of SacFL in both class-incremental and domain-incremental scenarios. Furthermore, a demo system has been developed to verify its practicality.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited storage in federated continual learning
Improves autonomous task shift detection in FCL
Enhances adversarial task handling in resource-constrained devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-Decoder separates task components
Contrastive learning detects data shifts
Autonomous triggers for CL strategies
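The two ideas in the list above can be sketched in a few lines: a shared task-robust encoder with small per-task decoder heads (so only the lightweight heads accumulate in on-device storage), plus a drift score on encoder features that triggers CL when a new task appears. Note this is a minimal NumPy sketch under stated assumptions; the class names, dimensions, and the cosine-based score are illustrative, and the paper's actual detector is contrastive-learning based rather than this simple similarity check.

```python
import numpy as np

rng = np.random.default_rng(0)

class DecoupledModel:
    """Shared task-robust encoder plus lightweight per-task heads (illustrative)."""

    def __init__(self, in_dim, feat_dim):
        # Task-robust encoder weights: shared across tasks and synced via
        # federated aggregation (random init here purely for illustration).
        self.feat_dim = feat_dim
        self.W_enc = rng.normal(size=(in_dim, feat_dim)) / np.sqrt(in_dim)
        # Task-sensitive decoders: one small matrix per task, so only these
        # lightweight components need to be retained on the end device.
        self.heads = {}

    def encode(self, x):
        return np.tanh(x @ self.W_enc)

    def add_head(self, task_id, n_classes):
        self.heads[task_id] = rng.normal(size=(self.feat_dim, n_classes)) * 0.01

def drift_score(feats_new, feats_ref):
    # Cosine similarity between mean embeddings of the incoming batch and a
    # reference batch; a sharp drop suggests a task shift and would trigger
    # the CL (or defense) policy. A stand-in for the contrastive detector.
    a, b = feats_new.mean(axis=0), feats_ref.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

model = DecoupledModel(in_dim=32, feat_dim=8)
model.add_head("task0", n_classes=10)

ref = model.encode(rng.normal(loc=1.0, size=(64, 32)))       # reference task data
same = model.encode(rng.normal(loc=1.0, size=(64, 32)))      # same distribution
shifted = model.encode(rng.normal(loc=-1.0, size=(64, 32)))  # drifted distribution

print(drift_score(same, ref), drift_score(shifted, ref))
```

In this sketch, data from the same distribution keeps the score high while the shifted batch drives it down, which is the signal an end device could use to decide, without labels or manual annotation, that a new task has arrived.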
๐Ÿ”Ž Similar Papers
No similar papers found.
Zhengyi Zhong
National University of Defense Technology
federated learning, domain adaptation, continual learning, machine unlearning
Weidong Bao
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Ji Wang
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
Jianguo Chen
School of Software Engineering, Sun Yat-sen University, China
Lingjuan Lyu
Sony
Foundation Models, Federated Learning, Responsible AI
Wei Yang Bryan Lim
Assistant Professor, Nanyang Technological University (NTU), Singapore
Edge Intelligence, Federated Learning, Applied AI, Sustainable AI