🤖 AI Summary
Although CLIP demonstrates strong performance in zero-shot image recognition, its predictions lack interpretability, and existing explanation methods rely on manually annotated concepts with limited generalization. This work proposes an unsupervised explanation approach that constructs a semantic concept space from language descriptions and projects CLIP’s image-text embeddings into this space via a joint alignment-and-reconstruction objective. The method generates human-understandable, concept-level explanations without compromising zero-shot accuracy. Experiments show that the proposed approach preserves CLIP’s original performance across five benchmarks—CIFAR-100, CUB-200-2011, Places365, ImageNet-100, and ImageNet-1k—while delivering transparent and generalizable semantic explanations.
📝 Abstract
Large-scale vision-language models such as CLIP have achieved remarkable success in zero-shot image recognition, yet their predictions remain largely opaque to human understanding. In contrast, Concept Bottleneck Models provide interpretable intermediate representations by reasoning through human-defined concepts, but they rely on concept supervision and lack the ability to generalize to unseen classes. We introduce EZPC, a framework that bridges these two paradigms by explaining CLIP's zero-shot predictions through human-understandable concepts. Our method projects CLIP's joint image-text embeddings into a concept space learned from language descriptions, enabling faithful and transparent explanations without additional supervision. The model learns this projection via a combination of alignment and reconstruction objectives, ensuring that concept activations preserve CLIP's semantic structure while remaining interpretable. Extensive experiments on five benchmark datasets (CIFAR-100, CUB-200-2011, Places365, ImageNet-100, and ImageNet-1k) demonstrate that our approach maintains CLIP's strong zero-shot classification accuracy while providing meaningful concept-level explanations. By grounding open-vocabulary predictions in explicit semantic concepts, our method offers a principled step toward interpretable and trustworthy vision-language models. Code is available at https://github.com/oonat/ezpc.
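The abstract describes projecting CLIP embeddings into a concept space and scoring a reconstruction objective that preserves the embedding's semantic structure. The following is a minimal NumPy sketch of that general idea, not the paper's actual implementation: the concept matrix `C`, the toy dimensions, and the cosine-based reconstruction loss are all illustrative assumptions, since the abstract does not specify architectures or loss formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4  # toy embedding dimension and number of concepts (illustrative)

# Hypothetical concept embeddings, one unit row per language-derived concept.
# In the paper these would come from encoding concept descriptions with CLIP's
# text encoder; here they are random stand-ins.
C = rng.normal(size=(k, d))
C /= np.linalg.norm(C, axis=1, keepdims=True)

# A CLIP-style image embedding (random stand-in), normalized to unit length.
z = rng.normal(size=d)
z /= np.linalg.norm(z)

# Concept activations: project the embedding onto the concept space.
# These per-concept scores are what would be shown as the explanation.
a = C @ z                      # shape (k,)

# Reconstruction: map activations back to embedding space.
z_hat = C.T @ a                # shape (d,)
z_hat /= np.linalg.norm(z_hat)

# A cosine-style reconstruction loss: small when the concept space
# preserves the original embedding's direction (semantic structure).
recon_loss = 1.0 - float(z @ z_hat)
print(a.shape, z_hat.shape)
```

In a trained system, `C` would be optimized (or selected) so that `recon_loss` stays low across images while each row of `C` remains tied to a nameable concept, which is what lets the activations `a` double as an explanation.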