Vertical Federated Continual Learning via Evolving Prototype Knowledge

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vertical federated learning (VFL) faces dual challenges of catastrophic forgetting and strict privacy constraints in continual learning settings—specifically, class-incremental learning (CIL) and feature-incremental learning (FIL). To address these, we propose the first VFL-native continual learning framework. Our method introduces an evolutionary prototype knowledge mechanism that dynamically maintains cross-task class prototypes within the global model, and a local parameter-constrained optimization strategy that enables stable knowledge transfer while preserving data privacy. By integrating prototype networks, parameter isolation, and knowledge distillation, the framework jointly ensures representation consistency and parameter stability. Extensive experiments on standard CIL and FIL benchmarks demonstrate significant improvements: +10.39% and +35.15% accuracy over state-of-the-art methods, respectively. Our approach markedly mitigates catastrophic forgetting and enhances long-term generalization performance under privacy-preserving VFL constraints.
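The paper does not spell out the prototype update rule here, but the idea of "dynamically maintaining cross-task class prototypes" can be sketched as a per-class mean embedding that is evolved (e.g. via an exponential moving average) rather than overwritten when a new task arrives. The class `PrototypeBank`, its `momentum` parameter, and the nearest-prototype classifier below are illustrative assumptions, not the authors' exact mechanism:

```python
import numpy as np

class PrototypeBank:
    """Illustrative sketch: one prototype (mean embedding) per class,
    retained across tasks and blended with new statistics via an
    exponential moving average so earlier class knowledge survives."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum   # weight kept on the existing prototype
        self.prototypes = {}       # class label -> prototype vector

    def update(self, embeddings, labels):
        """Update prototypes from a batch of (embedding, label) pairs."""
        for cls in np.unique(labels):
            batch_mean = embeddings[labels == cls].mean(axis=0)
            if cls in self.prototypes:
                # evolve the stored prototype instead of replacing it
                self.prototypes[cls] = (self.momentum * self.prototypes[cls]
                                        + (1 - self.momentum) * batch_mean)
            else:
                self.prototypes[cls] = batch_mean

    def classify(self, embedding):
        """Nearest-prototype prediction."""
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(embedding - self.prototypes[c]))
```

Because only prototypes (aggregate statistics), not raw features, would be exchanged, such a scheme is compatible with the privacy constraints of VFL.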

📝 Abstract
Vertical Federated Learning (VFL) has garnered significant attention as a privacy-preserving machine learning framework for sample-aligned feature federation. However, traditional VFL approaches do not address the challenges of class and feature continual learning, resulting in catastrophic forgetting of knowledge from previous tasks. To address these challenges, we propose a novel vertical federated continual learning method, named Vertical Federated Continual Learning via Evolving Prototype Knowledge (V-LETO), which primarily facilitates the transfer of knowledge from previous tasks through the evolution of prototypes. Specifically, we propose an evolving prototype knowledge method, enabling the global model to retain both previous and current task knowledge. Furthermore, we introduce a model optimization technique that mitigates the forgetting of previous task knowledge by restricting updates to specific parameters of the local model, thereby enhancing overall performance. Extensive experiments conducted in both CIL and FIL settings demonstrate that our method, V-LETO, outperforms other state-of-the-art methods. For example, our method outperforms the state-of-the-art method by 10.39% and 35.15% on CIL and FIL tasks, respectively. Our code is available at https://anonymous.4open.science/r/V-LETO-0108/README.md.
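The abstract's "restricting updates to specific parameters of the local model" is a form of parameter isolation. A minimal sketch, assuming parameters carry importance scores accumulated from previous tasks (the function name, `threshold`, and the scoring scheme are hypothetical, not the paper's exact technique):

```python
import numpy as np

def masked_sgd_step(params, grads, importance, lr=0.1, threshold=0.5):
    """One SGD step that freezes parameters deemed important for old tasks.

    `importance` scores (e.g. accumulated gradient magnitudes from previous
    tasks) at or above `threshold` zero out the update for that parameter,
    preserving the knowledge encoded there while the remaining parameters
    adapt to the current task.
    """
    mask = (importance < threshold).astype(params.dtype)  # 1 = free to update
    return params - lr * grads * mask
```

In a deep-learning framework the same effect is usually achieved by zeroing gradients (or setting `requires_grad=False` in PyTorch) on the protected subset before the optimizer step.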
Problem

Research questions and friction points this paper is trying to address.

Catastrophic forgetting of previous-task knowledge in VFL.
No support for class-incremental (CIL) or feature-incremental (FIL) learning in existing VFL methods.
Strict privacy constraints that prevent parties from sharing raw features.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolving prototype knowledge method that retains both previous- and current-task knowledge in the global model
Local model optimization technique that restricts updates to specific parameters to curb forgetting
Vertical federated continual learning framework (V-LETO) covering both CIL and FIL settings
Shuo Wang
Beijing Institute of Technology
Keke Gai
Beijing Institute of Technology
Cyber Security · Blockchain · AI Security · Privacy-preserving Computation · FinTech
Jing Yu
Northwestern University
Sustainability · Life Cycle Analysis · Transportation Management · Operations Research
Liehuang Zhu
Beijing Institute of Technology
Qi Wu
School of Computer Science, The University of Adelaide