🤖 AI Summary
This work addresses theoretical and practical challenges in interpretable concept learning from high-dimensional data (e.g., images), particularly mitigating concept leakage, i.e., spurious information in the learned concepts. Methodologically, it introduces a concept learning framework with provable guarantees on the correctness of the learned concepts and on the number of required labels, integrating causal representation learning with concept alignment, without assuming that concepts are statistically independent and without requiring human interventions. The approach combines a linear and a nonparametric estimator of the concept mapping, with a finite-sample high-probability analysis for the former and an asymptotic consistency result for the latter. Experiments on synthetic and image benchmarks show that the learned concepts capture the intended causal semantics, reduce annotation burden, and improve on existing concept bottleneck models (CBMs) in robustness, interpretability, and automation.
📝 Abstract
Machine learning is a vital part of many real-world systems, but several concerns remain about the lack of interpretability, explainability, and robustness of black-box AI systems. Concept-based models (CBMs) address some of these challenges by learning interpretable concepts from high-dimensional data, e.g., images, which are then used to predict labels. An important issue in CBMs is concept leakage, i.e., spurious information in the learned concepts, which effectively leads to learning "wrong" concepts. Current mitigation strategies are heuristic, rely on strong assumptions, e.g., that the concepts are statistically independent of each other, or require substantial human interaction in terms of both interventions and labels provided by annotators. In this paper, we describe a framework that provides theoretical guarantees on the correctness of the learned concepts and on the number of required labels, without requiring any interventions. Our framework leverages causal representation learning (CRL) to learn high-level causal variables from low-level data, and learns to align these variables with interpretable concepts. We propose a linear and a non-parametric estimator for this mapping, providing a finite-sample high-probability result in the linear case and an asymptotic consistency result for the non-parametric estimator. We implement our framework with state-of-the-art CRL methods and show its efficacy in learning the correct concepts on synthetic and image benchmarks.
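To make the alignment step concrete, the following is a minimal illustrative sketch of the linear case: latent variables produced by a CRL encoder are mapped to concepts via a linear transformation estimated by least squares from a small set of concept-labeled samples. All names and the toy data below are assumptions for illustration, not the paper's actual implementation or its finite-sample analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

d_latent, d_concept, n_labeled = 5, 3, 50

# Hypothetical ground-truth alignment (unknown in practice) and latent
# variables standing in for the output of a trained CRL encoder.
A_true = rng.normal(size=(d_concept, d_latent))
Z = rng.normal(size=(n_labeled, d_latent))  # learned causal variables

# Noisy concept annotations for the small labeled subset.
C = Z @ A_true.T + 0.01 * rng.normal(size=(n_labeled, d_concept))

# Linear estimator: solve min_A ||Z A^T - C||_F^2 in closed form.
X_hat, *_ = np.linalg.lstsq(Z, C, rcond=None)
A_hat = X_hat.T

# With enough labeled samples, the estimate recovers the alignment.
err = np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true)
print(f"relative alignment error: {err:.4f}")
```

In this toy setting the relative error shrinks as the number of labeled samples grows, which is the intuition behind a label-complexity guarantee: only the low-dimensional alignment map, not the full representation, needs concept annotations.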