Walking the Web of Concept-Class Relationships in Incrementally Trained Interpretable Models

📅 2025-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses concept drift, evolving class dependencies, and degradation of concept–class relationships arising from dynamic concept and class evolution in incremental learning. To this end, we propose Multi-granularity Concept Incremental Learning (MuCIL), the first method to simultaneously mitigate forgetting at the concept level, class level, and concept–class relational level within an incremental learning setting. MuCIL introduces natural-language-aligned multimodal concept representations, enabling interpretable classification without increasing model parameters. It further incorporates relation-aware optimization, visual concept localization, and human-in-the-loop intervention mechanisms. Evaluated on multiple benchmarks, MuCIL achieves state-of-the-art performance for concept-based models, with accuracy improvements exceeding 2× on several tasks. The approach significantly enhances model interpretability and controllability while preserving parameter efficiency and relational fidelity across incremental steps.

📝 Abstract
Concept-based methods have emerged as a promising direction to develop interpretable neural networks in standard supervised settings. However, most works that study them in incremental settings assume either a static concept set across all experiences or that each experience relies on a distinct set of concepts. In this work, we study concept-based models in a more realistic, dynamic setting where new classes may rely on older concepts in addition to introducing new concepts themselves. We show that concepts and classes form a complex web of relationships, which is susceptible to degradation and needs to be preserved and augmented across experiences. We introduce new metrics to show that existing concept-based models cannot preserve these relationships even when trained using methods to prevent catastrophic forgetting, since they cannot handle forgetting at concept, class, and concept-class relationship levels simultaneously. To address these issues, we propose a novel method, MuCIL, that uses multimodal concepts to perform classification without increasing the number of trainable parameters across experiences. The multimodal concepts are aligned to concepts provided in natural language, making them interpretable by design. Through extensive experimentation, we show that our approach obtains state-of-the-art classification performance compared to other concept-based models, achieving over 2$\times$ the classification performance in some cases. We also study the ability of our model to perform interventions on concepts, and show that it can localize visual concepts in input images, providing post-hoc interpretations.
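The abstract describes classifying through natural-language-aligned concepts rather than raw features. A minimal sketch of that general idea follows; the dimensions, the CLIP-style frozen text encoder, and all variable names are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
d_embed, n_concepts, n_classes = 512, 8, 3

# Frozen text embeddings for natural-language concepts (e.g. from a
# CLIP-style text encoder). New concepts add rows here, not trainable
# parameters in the backbone.
concept_text_emb = rng.normal(size=(n_concepts, d_embed))
concept_text_emb /= np.linalg.norm(concept_text_emb, axis=1, keepdims=True)

# Image embedding from a (frozen) vision encoder, L2-normalized.
img_emb = rng.normal(size=(d_embed,))
img_emb /= np.linalg.norm(img_emb)

# Concept activations = similarity between the image and each concept's text.
concept_scores = concept_text_emb @ img_emb        # shape: (n_concepts,)

# Interpretable head: class logits are linear in concept scores, so each
# weight W[c, k] is a readable class-concept relationship.
W = rng.normal(size=(n_classes, n_concepts))
logits = W @ concept_scores
pred = int(np.argmax(logits))
```

Because the class logits depend on the input only through `concept_scores`, inspecting those scores explains each prediction in terms of named concepts.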
Problem

Research questions and friction points this paper is trying to address.

Preserving dynamic concept-class relationships
Mitigating catastrophic forgetting in incremental learning
Enhancing interpretability with multimodal concepts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic concept-class relationship preservation
Multimodal concepts without added parameters
Interpretable, natural-language-aligned classifications
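The abstract also highlights test-time interventions on concepts. A hedged sketch of what a concept intervention can look like in a linear concept-bottleneck head (the weights, scores, and the clamped value are all made-up illustrations, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_concepts, n_classes = 8, 3

W = rng.normal(size=(n_classes, n_concepts))       # class-concept weights
concept_scores = rng.normal(size=(n_concepts,))    # model's predicted concepts

pred_before = int(np.argmax(W @ concept_scores))

# Human-in-the-loop intervention: an annotator overrides one concept
# activation (e.g. asserts concept 2 is strongly present) and the class
# prediction is recomputed from the corrected concepts.
corrected = concept_scores.copy()
corrected[2] = 3.0

pred_after = int(np.argmax(W @ corrected))
```

Since the head is linear in the concepts, the effect of the override on each class logit is simply `W[c, 2] * (3.0 - concept_scores[2])`, which is what makes such interventions predictable and auditable.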
Susmit Agrawal
PhD Candidate at IMPRS-IS
NeuroAI · Deep Learning · Computer Vision
Sri Deepika Vemuri
Indian Institute of Technology Hyderabad
Siddarth Chakaravarthy
Indian Institute of Technology Hyderabad
Vineeth N. Balasubramanian
Indian Institute of Technology Hyderabad