🤖 AI Summary
Existing concept embedding methods struggle to model inter-concept relationships and rely heavily on multi-granularity human annotations, limiting both their interpretability and practical applicability. This work proposes Hierarchical Concept Embedding Models (HiCEMs), which, for the first time, introduce hierarchical structure into concept representations and integrate an unsupervised Concept Splitting technique that automatically discovers fine-grained sub-concepts from pretrained models. HiCEMs generate multi-level, interpretable embeddings without additional annotations and support multi-granularity interventions at test time. Evaluated across multiple datasets, including the newly introduced PseudoKitchens, the approach uncovers human-understandable sub-concepts, improves task accuracy, and enables effective explanation and intervention.
📝 Abstract
Modern deep neural networks remain challenging to interpret due to the opacity of their latent representations, which impedes model understanding, debugging, and debiasing. Concept Embedding Models (CEMs) address this by mapping inputs to human-interpretable concept representations from which downstream tasks can be predicted. However, CEMs fail to represent inter-concept relationships and require concept annotations at multiple granularities during training, limiting their applicability. In this paper, we introduce Hierarchical Concept Embedding Models (HiCEMs), a new family of CEMs that explicitly model concept relationships through hierarchical structures. To enable HiCEMs in real-world settings, we propose Concept Splitting, a method that automatically discovers finer-grained sub-concepts in a pretrained CEM's embedding space without requiring additional annotations. This allows HiCEMs to generate fine-grained explanations from limited concept labels, reducing the annotation burden. Our evaluation across multiple datasets, including a user study and experiments on PseudoKitchens, a newly proposed concept-based dataset of 3D kitchen renders, demonstrates that (1) Concept Splitting discovers human-interpretable sub-concepts unseen during training that can be used to train highly accurate HiCEMs, and (2) HiCEMs enable powerful test-time concept interventions at different granularities, leading to improved task accuracy.