AI Summary
Federated continual learning (FCL) on resource-constrained edge devices faces challenges under dynamic data evolution and distributional drift, including high storage overhead, reliance on manual annotation for task transitions, and inability to autonomously distinguish benign from adversarial tasks. Method: We propose a lightweight FCL framework featuring: (1) a novel encoder-decoder decoupled architecture that drastically reduces model storage footprint; (2) a contrastive-learning-based adaptive drift detection mechanism enabling unsupervised discrimination of task types (benign vs. adversarial) and automatic policy triggering; and (3) task-sensitive lightweight components to enhance incremental generalization. Results: Evaluated on CIFAR-100 and THUCNews across class-incremental and domain-incremental settings, the framework achieves over 40% reduction in storage overhead while maintaining robust performance. A deployable, edge-executable demonstration system has been implemented.
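The storage saving claimed for point (1) comes from keeping one shared task-robust encoder and storing only a small task-sensitive head per task. The following toy sketch illustrates that accounting; all names, layer sizes, and the 5-task setup are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch (not the paper's implementation): a shared task-robust
# encoder is stored once, while only a lightweight task-sensitive decoder
# (classifier head) is kept per task. Sizes are arbitrary toy values.
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: one large weight matrix, stored a single time.
encoder_W = rng.normal(size=(512, 128))  # 512-dim input -> 128-dim feature

def encode(x):
    """Task-robust feature extractor, frozen across tasks."""
    return np.tanh(x @ encoder_W)

# Task-sensitive decoders: one small head per task.
decoders = {task_id: rng.normal(size=(128, 10)) for task_id in range(5)}

def predict(x, task_id):
    """Route features through the head belonging to the given task."""
    return encode(x) @ decoders[task_id]

# Storage comparison: five full models vs. shared encoder + five heads.
full_per_task = 5 * (encoder_W.size + 128 * 10)
decoupled = encoder_W.size + 5 * 128 * 10
print("parameters, 5 full models:", full_per_task)
print("parameters, decoupled:    ", decoupled)
print(f"reduction: {1 - decoupled / full_per_task:.1%}")
```

In this toy configuration the per-task storage collapses to the head size, which is where an encoder-decoder decoupling of this kind gets its savings; the paper's reported figure (over 40%) depends on its real model and task count.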
Abstract
The proliferation of end devices has led to a distributed computing paradigm, wherein on-device machine learning models continuously process diverse data generated by these devices. The dynamic nature of this data, characterized by continuous change or data drift, poses significant challenges for on-device models. Continual learning (CL) addresses this issue by enabling machine learning models to incrementally update their knowledge and mitigate catastrophic forgetting. However, the traditional centralized approach to CL is unsuitable for end devices due to privacy and data-volume concerns. In this context, federated continual learning (FCL) emerges as a promising solution, preserving user data locally while enhancing models through collaborative updates. To address the challenges of limited storage resources for CL, poor autonomy in task-shift detection, and difficulty in coping with new adversarial tasks in the FCL scenario, we propose a novel FCL framework named SacFL. SacFL employs an encoder-decoder architecture to separate task-robust and task-sensitive components, significantly reducing storage demands by retaining only lightweight task-sensitive components on resource-constrained end devices. Moreover, SacFL leverages contrastive learning to introduce an autonomous data-shift detection mechanism, enabling it to discern whether a new task has emerged and whether that task is benign. This capability allows a device to autonomously trigger CL or an attack-defense strategy without additional information, which is more practical for end devices. Comprehensive experiments on multiple text and image datasets, such as CIFAR-100 and THUCNews, validate the effectiveness of SacFL in both class-incremental and domain-incremental scenarios. Furthermore, a demo system has been developed to verify its practicality.
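The abstract's autonomous shift-detection idea, deciding from embeddings alone whether data belongs to a known task, a new benign task, or an adversarial one, can be sketched with a simple prototype-similarity rule. This is a hedged toy stand-in, not SacFL's actual contrastive mechanism: the two thresholds, the cosine score, and the three-way rule are all illustrative assumptions.

```python
# Toy stand-in for embedding-based shift detection: compare the centroid of an
# incoming batch against stored task prototypes. High similarity -> same task;
# moderate -> new benign task (trigger continual learning); very low ->
# likely adversarial (trigger defense). Thresholds are arbitrary assumptions.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_shift(batch_emb, prototypes, tau_new=0.8, tau_adv=0.2):
    """Return 'same_task', 'new_benign_task', or 'adversarial_task'."""
    centroid = batch_emb.mean(axis=0)
    best = max(cosine(centroid, p) for p in prototypes)
    if best >= tau_new:
        return "same_task"            # still matches a known task
    if best >= tau_adv:
        return "new_benign_task"      # drifted but structured -> run CL
    return "adversarial_task"         # far from every prototype -> defend

# Deterministic toy batches against a single known-task prototype.
proto = [np.ones(8)]
same = np.ones((16, 8))
drift = np.concatenate([np.ones((16, 4)), np.zeros((16, 4))], axis=1)
adv = -np.ones((16, 8))

print(detect_shift(same, proto))   # -> same_task
print(detect_shift(drift, proto))  # -> new_benign_task
print(detect_shift(adv, proto))    # -> adversarial_task
```

The appeal of such a rule on end devices is that it needs no labels or manual task-boundary annotation: the device itself decides which policy (continual learning vs. defense) to trigger, which is the autonomy the abstract emphasizes.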