Federated Class-Incremental Learning with Hierarchical Generative Prototypes

📅 2024-06-04
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
In federated continual learning (FCL), two intertwined biases—incremental bias (favoring newly introduced classes) and federated bias (favoring locally dominant classes)—induce classifier imbalance and degrade generalization. This work proposes, for the first time, a joint constraint mechanism to simultaneously mitigate both biases. We introduce hierarchical generative prototypes to enable global predictive calibration, departing from conventional parameter-aggregation paradigms. Our framework further integrates learnable prompt tuning atop pretrained backbones, hierarchical prototype modeling, and federated-level class-balanced knowledge distillation. Evaluated on standard FCL benchmarks, our method achieves an average accuracy improvement of 7.8% over state-of-the-art approaches. It establishes a novel paradigm for privacy-preserving collaborative learning under dynamic data distributions.

Technology Category

Application Category

📝 Abstract
Federated Learning (FL) aims at unburdening the training of deep models by distributing computation across multiple devices (clients) while safeguarding data privacy. On top of that, Federated Continual Learning (FCL) also accounts for data distribution evolving over time, mirroring the dynamic nature of real-world environments. While previous studies have identified Catastrophic Forgetting and Client Drift as primary causes of performance degradation in FCL, we shed light on the importance of Incremental Bias and Federated Bias, which cause models to prioritize classes that are recently introduced or locally predominant, respectively. Our proposal constrains both biases in the last layer by efficiently finetuning a pre-trained backbone using learnable prompts, resulting in clients that produce less biased representations and more biased classifiers. Therefore, instead of solely relying on parameter aggregation, we leverage generative prototypes to effectively balance the predictions of the global model. Our method significantly improves the current State Of The Art, providing an average increase of +7.8% in accuracy. Code to reproduce the results is provided in the suppl. material.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in federated continual learning
Mitigates incremental and federated biases in class prioritization
Improves global model accuracy via hierarchical generative prototypes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical generative prototypes balance global predictions
Learnable prompts fine-tune pre-trained backbone efficiently
Mitigates incremental and federated biases in last layer
🔎 Similar Papers
No similar papers found.
R
Riccardo Salami
AImageLab - University of Modena and Reggio Emilia, Modena, Italy
P
Pietro Buzzega
AImageLab - University of Modena and Reggio Emilia, Modena, Italy
M
Matteo Mosconi
AImageLab - University of Modena and Reggio Emilia, Modena, Italy
M
Mattia Verasani
AImageLab - University of Modena and Reggio Emilia, Modena, Italy
Simone Calderara
Simone Calderara
University of Modena and Reggio Emilia
Machine learningcontinual learningtrackingpattern recognition