Language Guided Concept Bottleneck Models for Interpretable Continual Learning

📅 2025-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Continual learning (CL) faces the dual challenge of mitigating catastrophic forgetting while ensuring decision interpretability. To address this, we propose the first language-guided concept bottleneck model for CL, introducing a CLIP-based semantic alignment framework at the concept layer that establishes human-understandable and cross-task generalizable decision mechanisms. Our method enables differentiable, reusable concept-level reasoning via natural language supervision and supports concept-level attribution visualization, thereby jointly enhancing knowledge retention and decision transparency. Evaluated on the ImageNet-subset continual learning benchmark, our approach achieves a 3.06% improvement in average accuracy over state-of-the-art methods. By unifying semantic grounding, concept reuse, and interpretable inference within a continual learning setting, this work establishes a novel paradigm for explainable continual learning.

Technology Category

Application Category

📝 Abstract
Continual learning (CL) aims to enable learning systems to acquire new knowledge constantly without forgetting previously learned information. CL faces the challenge of mitigating catastrophic forgetting while maintaining interpretability across tasks. Most existing CL methods focus primarily on preserving learned knowledge to improve model performance. However, as new information is introduced, the interpretability of the learning process becomes crucial for understanding the evolving decision-making process, yet it is rarely explored. In this paper, we introduce a novel framework that integrates language-guided Concept Bottleneck Models (CBMs) to address both challenges. Our approach leverages the Concept Bottleneck Layer, aligning semantic consistency with CLIP models to learn human-understandable concepts that can generalize across tasks. By focusing on interpretable concepts, our method not only enhances the models ability to retain knowledge over time but also provides transparent decision-making insights. We demonstrate the effectiveness of our approach by achieving superior performance on several datasets, outperforming state-of-the-art methods with an improvement of up to 3.06% in final average accuracy on ImageNet-subset. Additionally, we offer concept visualizations for model predictions, further advancing the understanding of interpretable continual learning.
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in continual learning
Maintain interpretability across evolving tasks
Align semantic concepts for transparent decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates language-guided Concept Bottleneck Models
Aligns semantic consistency with CLIP models
Enhances interpretability and knowledge retention
🔎 Similar Papers
L
Lu Yu
School of Computer Science and Engineering, Tianjin University of Technology
Haoyu Han
Haoyu Han
Michigan State University
Machine Learning on GraphsGraph for LLMs
Z
Zhe Tao
School of Computer Science and Engineering, Tianjin University of Technology
H
Hantao Yao
School of Information Science and Technology, University of Science and Technology of China
Changsheng Xu
Changsheng Xu
Professor, Institute of Automation, Chinese Academy of Sciences
MultimediaComputer vision