Debugging Concept Bottleneck Models through Removal and Retraining

πŸ“… 2025-09-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Concept bottleneck models (CBMs) often learn spurious concept shortcuts from biased data, leading to systematic misalignment with expert reasoning. To address this, we propose CBDebugβ€”a novel, interpretable debugging framework that leverages concept-level expert feedback. CBDebug transforms expert judgments about undesirable concepts into sample-level auxiliary labels, enabling supervised debiasing and targeted data augmentation; it further supports model retraining after concept removal. Unlike black-box retraining approaches, CBDebug provides transparent, concept-level interventions that jointly preserve interpretability and expert alignment. Extensive experiments on multiple benchmarks exhibiting spurious correlations demonstrate that CBDebug significantly outperforms existing methods: it substantially reduces reliance on misleading concepts, improves decision consistency with human experts, and enhances model robustness.

πŸ“ Abstract
Concept Bottleneck Models (CBMs) use a set of human-interpretable concepts to predict the final task label, enabling domain experts to not only validate the CBM's predictions, but also intervene on incorrect concepts at test time. However, these interventions fail to address systemic misalignment between the CBM and the expert's reasoning, such as when the model learns shortcuts from biased data. To address this, we present a general interpretable debugging framework for CBMs that follows a two-step process of Removal and Retraining. In the Removal step, experts use concept explanations to identify and remove any undesired concepts. In the Retraining step, we introduce CBDebug, a novel method that leverages the interpretability of CBMs as a bridge for converting concept-level user feedback into sample-level auxiliary labels. These labels are then used to apply supervised bias mitigation and targeted augmentation, reducing the model's reliance on undesired concepts. We evaluate our framework with both real and automated expert feedback, and find that CBDebug significantly outperforms prior retraining methods across multiple CBM architectures (PIP-Net, Post-hoc CBM) and benchmarks with known spurious correlations.
Problem

Research questions and friction points this paper is trying to address.

Debugging systemic misalignment in Concept Bottleneck Models
Removing undesired concepts learned from biased data
Retraining models to reduce reliance on spurious correlations
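The Removal step described above can be pictured as a simple operation on the bottleneck: drop the concept columns an expert flags as undesired so the downstream label predictor can no longer use them. The following is a minimal illustrative sketch, not the paper's implementation; the array shapes and function name are assumptions.

```python
import numpy as np

def remove_concepts(concept_acts, undesired):
    """Drop expert-flagged concept columns from a bottleneck.

    concept_acts: (n_samples, n_concepts) array of concept activations.
    undesired: list of concept column indices flagged by the expert.
    Returns the pruned activations and the indices that were kept.
    """
    keep = [c for c in range(concept_acts.shape[1]) if c not in set(undesired)]
    return concept_acts[:, keep], keep

# Toy example: two samples, three concepts; the expert flags concept 1
# (e.g. a spurious background cue) for removal.
acts = np.array([[0.9, 0.1, 0.7],
                 [0.2, 0.8, 0.3]])
pruned, kept = remove_concepts(acts, undesired=[1])
```

After removal, the label predictor would be retrained on the pruned bottleneck so its weights no longer route through the deleted concept.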
Innovation

Methods, ideas, or system contributions that make the work stand out.

Removal step eliminates undesired concepts
CBDebug converts concept feedback to sample labels
Supervised bias mitigation reduces spurious correlations
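The core idea in the bullets above, turning concept-level feedback into sample-level auxiliary labels for supervised debiasing, can be sketched roughly as follows. This is a hypothetical illustration under simple assumptions (a fixed activation threshold and plain reweighting), not CBDebug's exact procedure.

```python
import numpy as np

def auxiliary_labels(concept_acts, undesired, thresh=0.5):
    """Label each sample 1 if it strongly activates any undesired concept.

    concept_acts: (n_samples, n_concepts) array of concept activations.
    undesired: indices of expert-flagged concepts.
    thresh: illustrative activation cutoff (an assumption, not from the paper).
    """
    flagged = concept_acts[:, undesired]
    return (flagged > thresh).any(axis=1).astype(int)

def reweight(aux, bias_weight=0.1):
    # Downweight bias-aligned samples so retraining relies on them less;
    # targeted augmentation could instead oversample the aux == 0 group.
    return np.where(aux == 1, bias_weight, 1.0)

# Toy example: three samples, two concepts; concept 1 is flagged as spurious.
acts = np.array([[0.9, 0.1],
                 [0.2, 0.8],
                 [0.3, 0.2]])
aux = auxiliary_labels(acts, undesired=[1])
weights = reweight(aux)
```

The auxiliary labels act as a bridge: once each sample is tagged as bias-aligned or not, any off-the-shelf supervised debiasing or reweighting method can consume them during retraining.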
πŸ”Ž Similar Papers
No similar papers found.