CIP-Net: Continual Interpretable Prototype-based Network

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In continual learning, models suffer from catastrophic forgetting—degrading performance on previously learned tasks—while existing eXplainable AI (XAI) methods often rely on post-hoc explanations or memory-intensive exemplar replay, limiting scalability. To address this, we propose an exemplar-free self-explaining prototype network: it employs class prototypes as inherently interpretable representations, jointly optimizing prototype dynamics and similarity-based classification via end-to-end gradient descent. Explanations are generated intrinsically during inference, without storing historical samples or auxiliary memory modules. Our approach achieves state-of-the-art performance under both task-incremental and class-incremental settings, significantly outperforming prior exemplar-free self-explaining methods. It reduces memory overhead by an order of magnitude while uniquely unifying high interpretability, robust knowledge retention, and low resource consumption.

Technology Category

Application Category

📝 Abstract
Continual learning constrains models to learn new tasks over time without forgetting what they have already learned. A key challenge in this setting is catastrophic forgetting, where learning new information causes the model to lose its performance on previous tasks. Recently, explainable AI has been proposed as a promising way to better understand and reduce forgetting. In particular, self-explainable models are useful because they generate explanations during prediction, which can help preserve knowledge. However, most existing explainable approaches use post-hoc explanations or require additional memory for each new task, resulting in limited scalability. In this work, we introduce CIP-Net, an exemplar-free self-explainable prototype-based model designed for continual learning. CIP-Net avoids storing past examples and maintains a simple architecture, while still providing useful explanations and strong performance. We demonstrate that CIPNet achieves state-of-the-art performances compared to previous exemplar-free and self-explainable methods in both task- and class-incremental settings, while bearing significantly lower memory-related overhead. This makes it a practical and interpretable solution for continual learning.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in continual learning without storing past examples.
Introduces a self-explainable prototype-based model for scalable continual learning.
Reduces memory overhead while maintaining interpretability and state-of-the-art performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-explainable prototype-based model for continual learning
Exemplar-free design avoids storing past examples
Achieves state-of-the-art performance with low memory overhead
🔎 Similar Papers
No similar papers found.
F
Federico Di Valerio
Department of Computer, Control and Management Engineering (DIAG), Sapienza University, IT-00185, Rome, Italy
M
Michela Proietti
Department of Computer, Control and Management Engineering (DIAG), Sapienza University, IT-00185, Rome, Italy
A
Alessio Ragno
INSA Lyon, CNRS, LIRIS UMR 5205, FR-94276, Villeurbanne France
Roberto Capobianco
Roberto Capobianco
Sapienza University of Rome - Sony AI
RoboticsRobot LearningArtificial IntelligenceReinforcement Learning