🤖 AI Summary
Concept Bottleneck Models (CBMs) exhibit insufficient robustness in concept identification under distribution shifts, motivating a fine-grained evaluation benchmark. Method: The authors introduce SUB, a synthetic benchmark for concept robustness comprising 38,400 images built from a subset of the CUB dataset (33 bird classes and 45 fine-grained concepts), enabling controlled concept substitution to assess generalization. They further propose Tied Diffusion Guidance (TDG), a diffusion-based generation method in which two parallel denoising processes share noise, so that generated images depict both the correct bird class and the correct attribute. Contribution/Results: Experiments reveal substantial performance degradation of CBMs under concept substitutions, exposing generalization gaps in concept identification. SUB is publicly released with open-source code, providing infrastructure for robustness-aware, interpretable AI research.
📝 Abstract
Concept Bottleneck Models (CBMs) and other concept-based interpretable models show great promise for making AI applications more transparent, which is essential in fields like medicine. Despite their success, we demonstrate that CBMs struggle to reliably identify the correct concepts under distribution shifts. To assess the robustness of CBMs to concept variations, we introduce SUB: a fine-grained image and concept benchmark containing 38,400 synthetic images based on the CUB dataset. To create SUB, we select a CUB subset of 33 bird classes and 45 concepts and generate images that substitute a specific concept, such as wing color or belly pattern. We introduce a novel Tied Diffusion Guidance (TDG) method to precisely control the generated images, where noise sharing between two parallel denoising processes ensures that both the correct bird class and the correct attribute are generated. This novel benchmark enables rigorous evaluation of CBMs and similar interpretable models, contributing to the development of more robust methods. Our code is available at https://github.com/ExplainableML/sub and the dataset at http://huggingface.co/datasets/Jessica-bader/SUB.
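The core idea behind Tied Diffusion Guidance, two parallel denoising processes coupled through shared noise so that class and attribute conditions stay consistent, can be illustrated with a toy sketch. This is not the authors' implementation: the real method operates on a text-to-image diffusion model, while the denoiser, step sizes, and noise scale below are illustrative assumptions.

```python
import numpy as np

def toy_denoiser(x, condition):
    """Stand-in for a conditional noise-prediction network.
    (TDG uses a real text-to-image diffusion model; this toy
    function only mimics a condition-dependent prediction.)"""
    rng = np.random.default_rng(condition)
    return 0.1 * x + 0.05 * rng.standard_normal(x.shape)

def tied_diffusion_sketch(steps=10, dim=8, class_cond=0, attr_cond=1, seed=42):
    """Two parallel denoising trajectories: one conditioned on the bird
    class, one on the target attribute. Both start from the same latent
    and receive the *same* injected noise at every step, which keeps the
    trajectories tied together."""
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(dim)            # shared initial latent noise
    x_class, x_attr = x0.copy(), x0.copy()
    for _ in range(steps):
        eps_class = toy_denoiser(x_class, class_cond)
        eps_attr = toy_denoiser(x_attr, attr_cond)
        shared = rng.standard_normal(dim)    # identical noise for both branches
        x_class = x_class - eps_class + 0.01 * shared
        x_attr = x_attr - eps_attr + 0.01 * shared
    return x_class, x_attr

x_c, x_a = tied_diffusion_sketch()
```

Because the injected noise is shared, the two trajectories remain correlated rather than drifting apart, which is the intuition behind generating a single image that satisfies both conditions.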