🤖 AI Summary
Existing prompt-based federated continual learning (FCL) suffers from insufficient class-wise knowledge coherence between prompts: intra-class distribution shifts across clients degrade prompt semantic consistency, while inter-class correlations induce cross-class confusion, exacerbating spatio-temporal forgetting. This work systematically identifies and addresses this issue, proposing a class-aware client knowledge interaction mechanism comprising: (1) local class-distribution compensation to mitigate intra-class shift, and (2) class-aware prompt aggregation to suppress inter-class interference. The method unifies prompt learning, federated optimization, distribution alignment, and selective knowledge aggregation, enabling joint optimization of local compensation and global coordination. Evaluated on multiple FCL benchmarks, it achieves state-of-the-art performance, significantly alleviating spatio-temporal forgetting while enhancing model generalization and stability.
📝 Abstract
Federated continual learning (FCL) tackles scenarios of learning from continuously emerging task data across distributed clients, where the key challenge lies in simultaneously addressing temporal forgetting across tasks and spatial forgetting across clients. Recently, prompt-based FCL methods have shown advanced performance through task-wise prompt communication. In this study, we underscore that existing prompt-based FCL methods are prone to insufficient class-wise knowledge coherence between prompts across clients. Class-wise knowledge coherence involves two aspects: (1) the intra-class distribution gap across clients, which degrades the semantics learned by prompts; (2) inter-prompt class-wise relevance, which gives rise to cross-class knowledge confusion. During prompt communication, insufficient class-wise coherence exacerbates knowledge conflicts among new prompts and induces interference with old prompts, intensifying both spatial and temporal forgetting. To address these issues, we propose a novel Class-aware Client Knowledge Interaction (C${}^2$Prompt) method that explicitly enhances class-wise knowledge coherence during prompt communication. Specifically, a local class distribution compensation mechanism (LCDC) is introduced to reduce intra-class distribution disparities across clients, thereby reinforcing intra-class knowledge consistency. Additionally, a class-aware prompt aggregation scheme (CPA) is designed to alleviate inter-class knowledge confusion by selectively strengthening class-relevant knowledge aggregation. Extensive experiments on multiple FCL benchmarks demonstrate that C${}^2$Prompt achieves state-of-the-art performance. Our source code is available at https://github.com/zhoujiahuan1991/NeurIPS2025-C2Prompt
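To make the idea of class-relevant aggregation concrete, below is a minimal, hypothetical sketch of a class-aware server-side prompt aggregation step in the spirit of CPA. It is not the paper's actual algorithm: the function name, the use of raw class counts as a relevance signal, and all shapes are illustrative assumptions; the real CPA scheme operates on learned prompts and class-aware statistics described in the paper.

```python
import numpy as np

def class_aware_aggregate(prompts, class_counts, target_class):
    """Aggregate client prompts with class-aware weights (illustrative only).

    prompts:      (n_clients, prompt_dim) array, one prompt vector per client.
    class_counts: (n_clients, n_classes) array of local sample counts per class.
    target_class: index of the class whose knowledge we want to aggregate.
    """
    # Relevance of each client to the target class: the fraction of its local
    # data belonging to that class (a simple stand-in for class relevance).
    relevance = class_counts[:, target_class] / class_counts.sum(axis=1)
    # Normalize to a convex combination so clients rich in the target class
    # dominate the aggregated prompt, suppressing cross-class interference.
    weights = relevance / relevance.sum()
    return weights @ prompts  # weighted average prompt for this class

# Toy example: 3 clients, 8-dim prompts, 2 classes with skewed local data.
rng = np.random.default_rng(0)
prompts = rng.normal(size=(3, 8))
counts = np.array([[90.0, 10.0], [50.0, 50.0], [5.0, 95.0]])
agg = class_aware_aggregate(prompts, counts, target_class=1)
```

In this toy setup, the third client holds most of class 1's data, so its prompt receives the largest weight; a uniform (FedAvg-style) average would instead mix in prompts dominated by other classes, which is exactly the cross-class confusion the paper argues against.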