🤖 AI Summary
To address the statistical bias and noise interference that arise from client data/task heterogeneity, or even adversarial behavior, in federated continual learning (FCL), this paper introduces the "Accurate Forgetting" (AF) paradigm: proactively identifying and discarding unreliable feature representations induced by skewed distributions and noise before reusing knowledge. Methodologically, it proposes the first probability-based credibility assessment framework built upon normalizing flows, enabling quantifiable filtering at the granularity of individual pieces of knowledge. It further integrates generative replay with selective knowledge inheritance to dynamically strengthen the global model's robustness within the federated architecture. Evaluated on multiple heterogeneous FCL benchmarks, AF achieves an average accuracy improvement of 12.3%, significantly improving generalization and noise resilience. The approach offers a novel, interpretable, and computationally tractable pathway for bias mitigation in FCL.
📝 Abstract
Recent years have witnessed a burgeoning interest in federated learning (FL). However, the contexts in which clients engage in sequential learning remain under-explored. Bridging FL and continual learning (CL) gives rise to a challenging practical problem: federated continual learning (FCL). Existing research in FCL primarily focuses on mitigating the catastrophic forgetting issue of continual learning while collaborating with other clients. We argue that forgetting phenomena are not invariably detrimental. In this paper, we consider a more practical and challenging FCL setting characterized by potentially unrelated or even antagonistic data/tasks across different clients. In the FL scenario, statistical heterogeneity and data noise among clients may exhibit spurious correlations, which result in biased feature learning. While existing CL strategies focus on a complete utilization of previous knowledge, we find that forgetting biased information is beneficial in our study. Therefore, we propose a new concept, accurate forgetting (AF), and develop a novel generative-replay method which selectively utilizes previous knowledge in federated networks. We employ a probabilistic framework based on a normalizing flow model to quantify the credibility of previous knowledge. Comprehensive experiments affirm the superiority of our method over baselines.
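To make the credibility-assessment idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): a normalizing flow fitted to trusted feature representations assigns each incoming feature a log-density, and features the flow deems unlikely receive low credibility weights and can be selectively forgotten. A single affine flow layer with a standard-normal base stands in for a real flow model such as RealNVP; the function names and parameters (`shift`, `log_scale`, `temperature`) are illustrative assumptions.

```python
import numpy as np

def affine_flow_log_prob(x, shift, log_scale):
    """Log-density of x under a one-layer affine flow with a standard
    normal base distribution: z = (x - shift) * exp(-log_scale).
    By the change-of-variables formula, log p(x) = log N(z; 0, I)
    + log|det dz/dx|, where the log-determinant is -sum(log_scale)."""
    z = (x - shift) * np.exp(-log_scale)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)
    log_det = -log_scale.sum()
    return log_base + log_det

def credibility_weights(features, shift, log_scale, temperature=1.0):
    """Softmax-normalized credibility scores over a batch of features:
    high flow log-density -> high weight, low density -> near-zero weight."""
    logp = affine_flow_log_prob(features, shift, log_scale)
    w = np.exp((logp - logp.max()) / temperature)  # shift for stability
    return w / w.sum()

rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(8, 4))   # in-distribution features
outlier = rng.normal(6.0, 1.0, size=(1, 4))   # biased/noisy feature
batch = np.vstack([trusted, outlier])

# Identity flow parameters: the flow density reduces to the base N(0, I).
shift = np.zeros(4)
log_scale = np.zeros(4)
w = credibility_weights(batch, shift, log_scale)

# The outlier (last row) gets the smallest credibility weight and would
# be down-weighted (i.e., "accurately forgotten") during knowledge reuse.
assert w.argmin() == len(batch) - 1
```

In the paper's setting the weights would modulate how much each piece of replayed or inherited knowledge contributes to the global model; here a plain softmax over log-densities illustrates the mechanism.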