DOLFIN: Balancing Stability and Plasticity in Federated Continual Learning

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Federated continual learning (FCL) faces the fundamental trade-off between stability and plasticity, while simultaneously struggling to ensure privacy preservation and communication efficiency. To address these challenges, we propose DOLFIN: a vision Transformer–based framework that introduces Orthogonal Low-Rank Adaptation (Orthogonal LoRA) for lightweight incremental model updates, and a Dual Gradient Projection Memory (DualGPM) mechanism that dynamically constrains local parameter update directions to retain prior knowledge. DOLFIN natively supports differential privacy and low-bandwidth communication by transmitting only adapter parameters—not full models. Evaluated on four non-i.i.d. benchmarks—including CIFAR-100 and ImageNet-R—DOLFIN consistently outperforms six state-of-the-art baselines, achieving average accuracy gains of 2.1–5.7% while maintaining memory overhead comparable to baseline methods.
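The summary's core mechanism is that clients train and transmit only low-rank adapter factors rather than full model weights. A minimal numpy sketch of that low-rank adapter (LoRA) idea follows; the dimensions, scaling, and variable names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden size and LoRA rank (illustrative values)

W0 = rng.standard_normal((d, d))        # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # B starts at zero: adapter is a no-op

def adapted_forward(x, alpha=4.0):
    # effective weight is W0 + (alpha / r) * B @ A; only A and B are trained
    return x @ (W0 + (alpha / r) * B @ A).T

x = rng.standard_normal(d)
# with B = 0 the adapted model matches the frozen backbone exactly
assert np.allclose(adapted_forward(x), x @ W0.T)

# communication cost: clients ship only the adapter factors
full_params = W0.size
adapter_params = A.size + B.size
print(adapter_params, full_params)  # 512 vs 4096
```

Because the frozen backbone never leaves the client, bandwidth scales with the rank `r` rather than the model width, which is the communication-efficiency argument the summary makes.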

📝 Abstract
Federated continual learning (FCL) enables models to learn new tasks across multiple distributed clients while preserving privacy and without forgetting previously acquired knowledge. However, current methods struggle to balance performance, privacy preservation, and communication efficiency. We introduce DOLFIN (Distributed Online LoRA for Federated INcremental learning), a novel approach that combines Vision Transformers with low-rank adapters to learn new tasks efficiently and stably in federated environments. Our method leverages LoRA for minimal communication overhead and incorporates Dual Gradient Projection Memory (DualGPM) to prevent forgetting. Evaluated on CIFAR-100, ImageNet-R, ImageNet-A, and CUB-200 under two Dirichlet heterogeneity settings, DOLFIN consistently surpasses six strong baselines in final average accuracy while matching their memory footprint. Orthogonal low-rank adapters thus offer an effective and scalable solution for privacy-preserving continual learning in federated settings.
Problem

Research questions and friction points this paper is trying to address.

Balancing stability and plasticity in federated continual learning systems
Addressing privacy preservation and communication efficiency challenges
Preventing catastrophic forgetting while learning new distributed tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Vision Transformers with low-rank adapters
Uses LoRA for minimal communication overhead
Incorporates DualGPM to prevent catastrophic forgetting
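The forgetting-prevention ingredient above constrains local updates so they do not disturb directions important to earlier tasks. A minimal numpy sketch of that gradient-projection idea (in the style of GPM-family methods; the basis construction and names here are illustrative assumptions, not the paper's exact DualGPM procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Orthonormal basis M of directions important to previous tasks, e.g. top
# singular vectors of stored activations in GPM-style methods (illustrative).
M, _ = np.linalg.qr(rng.standard_normal((d, 3)))  # d x k, orthonormal columns

def project_gradient(g, M):
    # Remove the component of g lying in span(M), so the parameter update
    # is orthogonal to the subspace protected for earlier tasks.
    return g - M @ (M.T @ g)

g = rng.standard_normal(d)       # raw gradient for the current task
g_proj = project_gradient(g, M)  # constrained update direction

# the projected gradient has no component in the protected subspace
assert np.allclose(M.T @ g_proj, 0.0)
```

Updating along `g_proj` instead of `g` leaves activations of earlier tasks (to first order) unchanged, which is how projection-memory methods trade a little plasticity for stability.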
Omayma Moussadek
AImageLab, University of Modena and Reggio Emilia, Modena, Italy
Riccardo Salami
AImageLab, University of Modena and Reggio Emilia, Modena, Italy
Simone Calderara
University of Modena and Reggio Emilia
Machine learning, continual learning, tracking, pattern recognition