🤖 AI Summary
The tension between privacy preservation and knowledge retention in continual learning (CL) remains unresolved. Method: This work develops a systematic theoretical framework for differentially private (DP) continual learning, centered on an analysis of privacy budget composition over tasks. It shows that choosing the classifier's output label space directly from the task data is not DP, proposes a DP output mechanism over a task-independent label space, and integrates it into a hybrid architecture combining DP prototype-based classification with parameter-efficient DP adapters on top of pre-trained models. Contribution/Results: Under Rényi DP guarantees (ε ≤ 8), the method outperforms existing baselines across challenging scenarios, including domain shift, blurry task boundaries, and varied output label settings, while achieving both strong privacy protection and high accuracy. It demonstrates empirically that a favourable privacy–utility trade-off is attainable in CL and provides both a verifiable theoretical model and a practical architecture for deploying DP and CL together.
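The composition analysis can be made concrete with a small accounting sketch. The snippet below is a minimal illustration rather than the paper's implementation: it assumes each task t is trained with a mechanism whose Rényi DP curve ρ_t(α) is known (e.g., DP-SGD via the subsampled Gaussian mechanism), uses the fact that RDP composes additively across adaptively chosen tasks, and converts the total to an (ε, δ) guarantee via the standard conversion ε = ρ(α) + log(1/δ)/(α − 1).

```python
import numpy as np

# Minimal RDP accounting sketch for continual learning (illustrative only).
# Assumption: task t's training satisfies (alpha, rho_t(alpha))-RDP at each
# order alpha. RDP composes additively over adaptively chosen tasks.

ORDERS = np.array([1.5, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])

def rdp_to_eps(rdp, delta):
    """Standard RDP -> (eps, delta)-DP conversion, minimized over orders:
    eps(alpha) = rho(alpha) + log(1/delta) / (alpha - 1)."""
    eps = rdp + np.log(1.0 / delta) / (ORDERS - 1.0)
    i = int(np.argmin(eps))
    return eps[i], ORDERS[i]

# Hypothetical per-task RDP curves for 5 tasks: rho(alpha) = alpha / (2 sigma^2)
# is the Gaussian mechanism's curve; sigma = 5 is a placeholder noise level.
per_task_rdp = [ORDERS / (2 * 5.0**2) for _ in range(5)]
total_rdp = np.sum(per_task_rdp, axis=0)  # additive composition over tasks

eps, alpha = rdp_to_eps(total_rdp, delta=1e-5)
print(f"whole task sequence: ({eps:.2f}, 1e-5)-DP, best order alpha={alpha}")
```

Because the conversion is minimized over orders, tracking a grid of α values per task is enough to report a single budget for the whole task sequence.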
📝 Abstract
The goal of continual learning (CL) is to retain knowledge across tasks, but this conflicts with the strict privacy required for sensitive training data, which precludes storing or memorising individual samples. This work explores the intersection of CL and differential privacy (DP). We advance the theoretical understanding of, and introduce methods for, combining CL and DP. We formulate and clarify the theory of DP CL, focusing on privacy budget composition over tasks. We introduce different variants for choosing a classifier's output label space, show that choosing the output label space directly based on the task data is not DP, and offer a DP alternative. We propose a method that combines pre-trained models with DP prototype classifiers and parameter-efficient adapters learned under DP, addressing the trade-off between privacy and utility in a CL setting. We also demonstrate the effectiveness of our methods under varying degrees of domain shift, for blurry tasks, and with different output label settings.
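To see why the label-space choice matters: the set of labels observed in the task data is itself a data-dependent output, so a class contributed by a single person appears or disappears with that person, violating DP. A standard DP-style alternative, sketched below purely as an illustration (the noisy-threshold mechanism and all names are our assumptions, not necessarily the paper's construction), selects labels from a public, task-independent candidate set using noised counts.

```python
import numpy as np

# Hypothetical DP label-space selection (illustration, not the paper's method).
# The candidate label set is public and task-independent; the private data only
# decides which candidates survive a noisy-count threshold.

def dp_label_space(labels, candidate_labels, eps, threshold, rng=None):
    """eps-DP selection of labels via Laplace-noised counts.

    Assumes each individual contributes one example, so each per-label count
    has sensitivity 1; counts over disjoint label bins compose in parallel,
    and thresholding the noisy counts is post-processing.
    """
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    selected = []
    for c in candidate_labels:
        noisy_count = np.sum(labels == c) + rng.laplace(scale=1.0 / eps)
        if noisy_count > threshold:
            selected.append(c)
    return selected

# Example: only classes with enough (noisy) support enter the output space.
print(dp_label_space([0, 0, 1, 1, 1, 7], range(10), eps=1.0, threshold=2.5))
```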
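The prototype component admits a similarly compact sketch. Below is a minimal, hypothetical DP nearest-class-mean classifier over frozen pre-trained embeddings (constants, names, and the exact noising scheme are our assumptions, not the paper's): each embedding is clipped in L2 norm so that one example moves its class sum by at most `clip`, and Gaussian noise calibrated to that sensitivity is added before forming the prototype.

```python
import numpy as np

# Hypothetical DP prototype (nearest-class-mean) classifier on frozen features.
# One example affects exactly one class sum by at most `clip` in L2 norm, so
# the per-class Gaussian releases compose in parallel. Class counts are treated
# as public here; a full treatment would privatize them as well.

def dp_prototypes(feats, labels, classes, clip=1.0, sigma=5.0, rng=None):
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    d = feats.shape[1]
    protos = {}
    for c in classes:
        x = feats[labels == c]
        # Clip each embedding to L2 norm <= clip (sensitivity of the sum).
        scale = np.maximum(np.linalg.norm(x, axis=1, keepdims=True) / clip, 1.0)
        noisy_sum = (x / scale).sum(axis=0) + rng.normal(0.0, sigma * clip, d)
        protos[c] = noisy_sum / max(len(x), 1)
    return protos

def predict(protos, feat):
    # Classify by the nearest prototype in Euclidean distance.
    return min(protos, key=lambda c: np.linalg.norm(feat - protos[c]))
```

Since the prototypes are the only data-dependent state, classes arriving in later tasks can be added without revisiting earlier prototypes, which is part of what makes prototype classifiers attractive in a CL setting.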