🤖 AI Summary
This work addresses catastrophic forgetting and the lack of unified convergence guarantees in federated domain-incremental learning, which arise from data distribution shifts and partial client participation. To tackle these challenges, the authors propose SPECIAL, an algorithm that introduces a lightweight proximal anchor at the server to guide client updates toward alignment with historical global models, without requiring a replay buffer or task-specific classification heads, thereby enabling memory-free knowledge retention across tasks. SPECIAL is the first method to establish theoretical bounds for backward knowledge transfer under partial participation in federated domain-incremental learning, explicitly disentangling the effects of optimization variance and inter-task drift. Built on the FedAvg framework without altering communication protocols or model architecture, it achieves a non-convex convergence rate of $O((E/NT)^{1/2})$ and demonstrates superior empirical performance in mitigating forgetting compared to existing memory-free approaches.
📄 Abstract
Real-world federated systems seldom operate on static data: input distributions drift while privacy rules forbid raw-data sharing. We study this setting as Federated Domain-Incremental Learning (FDIL), where (i) clients are heterogeneous, (ii) tasks arrive sequentially with shifting domains, yet (iii) the label space remains fixed. Two theoretical pillars remain missing for FDIL under realistic deployment: a guarantee of backward knowledge transfer (BKT) and a convergence rate that holds across the sequence of all tasks with partial participation. We introduce SPECIAL (Server-Proximal Efficient Continual Aggregation for Learning), a simple, memory-free FDIL algorithm that adds a single server-side "anchor" to vanilla FedAvg: in each round, the server nudges the uniformly sampled participating clients' updates toward the previous global model with a lightweight proximal term. This anchor curbs cumulative drift without replay buffers, synthetic data, or task-specific heads, keeping communication and model size unchanged. Our theory shows that SPECIAL (i) preserves earlier tasks: a BKT bound caps any increase in prior-task loss by a drift-controlled term that shrinks with more rounds, local epochs, and participating clients; and (ii) learns efficiently across all tasks: the first communication-efficient non-convex convergence rate for FDIL with partial participation, $O((E/NT)^{1/2})$, where E is the number of local epochs, T the number of communication rounds, and N the number of participating clients per round, matching single-task FedAvg while explicitly separating optimization variance from inter-task drift. Experimental results further demonstrate the effectiveness of SPECIAL.
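The server-side anchor described above can be sketched as a small modification to the FedAvg aggregation step: average the sampled clients' models, then pull the result toward the previous global model via a proximal term. The sketch below is illustrative only; the function name and the anchor-strength hyperparameter `mu` are assumptions, not part of the paper's stated interface.

```python
import numpy as np

def special_server_aggregate(client_models, prev_global, mu=0.1):
    """One SPECIAL-style aggregation round (illustrative sketch).

    client_models : list of np.ndarray parameter vectors from the
                    uniformly sampled participating clients
    prev_global   : np.ndarray, global model from the previous round
    mu            : proximal anchor strength (hypothetical hyperparameter)
    """
    # Vanilla FedAvg step: uniform average of the sampled clients' models.
    fedavg_model = np.mean(client_models, axis=0)
    # Proximal anchor: minimize ||w - fedavg||^2 + mu * ||w - prev_global||^2,
    # whose closed-form solution interpolates toward the previous global model.
    return (fedavg_model + mu * prev_global) / (1.0 + mu)
```

With `mu = 0` this reduces exactly to FedAvg, so the anchor leaves communication and model size unchanged, as the abstract notes; only the server's post-aggregation arithmetic differs.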