🤖 AI Summary
Aligning human-interpretable concepts with the internal representations of machine learning models is a central challenge in explainable AI. This work proposes a geometric framework that, for the first time, formally characterizes “concept frustration”: an ontological mismatch between human-supervised concepts and the unsupervised representations learned by foundation models, which arises from unobserved latent concepts. Using task-aligned similarity metrics, a linear-Gaussian generative model, and concept-basis classifiers, the approach detects concept frustration on synthetic data and on real language and vision tasks, and the generative analysis yields a Bayes-optimal decomposition of predictive accuracy into contributions from known and unknown concepts. Incorporating the frustrating concept into an interpretable model reorganizes the geometry of its learned concept representations, better aligning human and machine concept reasoning.
📝 Abstract
Aligning human-interpretable concepts with the internal representations learned by modern machine learning systems remains a central challenge for interpretable AI. We introduce a geometric framework for comparing supervised human concepts with unsupervised intermediate representations extracted from foundation model embeddings. Motivated by the role of conceptual leaps in scientific discovery, we formalise the notion of concept frustration: a contradiction that arises when an unobserved concept induces relationships between known concepts that cannot be made consistent within an existing ontology. We develop task-aligned similarity measures that detect concept frustration between supervised concept-based models and unsupervised representations derived from foundation models, and show that the phenomenon is visible in task-aligned geometry where conventional Euclidean comparisons fail. Under a linear-Gaussian generative model we derive a closed-form expression for Bayes-optimal concept-based classifier accuracy, decomposing the predictive signal into known-known, known-unknown, and unknown-unknown contributions and identifying analytically where frustration affects performance. Experiments on synthetic data and real language and vision tasks demonstrate that frustration can be detected in foundation model representations, and that incorporating a frustrating concept into an interpretable model reorganises the geometry of learned concept representations to better align human and machine reasoning. These results suggest a principled framework for diagnosing incomplete concept ontologies and aligning human and machine conceptual reasoning, with implications for the development and validation of safe interpretable AI for high-risk applications.
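To make the setting concrete, the sketch below simulates a toy version of the scenario the abstract describes: known concepts and one unobserved concept jointly generate embeddings and labels through a linear-Gaussian model, a concept-basis classifier is trained with and without the unobserved concept, and concept directions are compared under both a plain Euclidean cosine and a simple task-weighted cosine. All dimensions, the probe-based weighting, and the `weighted_cosine` helper are illustrative assumptions for this sketch, not the paper's actual construction or its task-aligned similarity measure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy linear-Gaussian generative model (all sizes are hypothetical):
# known concepts c_known and one unobserved concept c_unknown jointly
# generate the embedding x and the task label y via linear maps plus noise.
n, d_known, d_emb = 5000, 3, 16
c_known = rng.normal(size=(n, d_known))      # human-supervised concepts
c_unknown = rng.normal(size=(n, 1))          # unobserved "frustrating" concept

W_known = rng.normal(size=(d_known, d_emb))
W_unknown = rng.normal(size=(1, d_emb))
x = c_known @ W_known + c_unknown @ W_unknown + 0.1 * rng.normal(size=(n, d_emb))

c_all = np.concatenate([c_known, c_unknown], axis=1)
w_y = rng.normal(size=d_known + 1)
y = c_all @ w_y + 0.1 * rng.normal(size=n) > 0

# Concept-basis classifiers: the gap between the two accuracies is an
# empirical stand-in for the "unknown" terms in the accuracy decomposition.
acc_known = LogisticRegression(max_iter=1000).fit(c_known, y).score(c_known, y)
acc_all = LogisticRegression(max_iter=1000).fit(c_all, y).score(c_all, y)
print(f"accuracy with known concepts only:      {acc_known:.3f}")
print(f"accuracy with unobserved concept added: {acc_all:.3f}")

# One simple stand-in for a task-aligned similarity: weight embedding
# dimensions by the magnitude of a linear probe trained on the task label,
# then compare concept directions under that weighted inner product.
probe = LogisticRegression(max_iter=1000).fit(x, y)
task_weights = np.abs(probe.coef_.ravel())

def weighted_cosine(u, v, w):
    """Cosine similarity under a diagonal task-aligned metric."""
    return (u * w) @ v / np.sqrt(((u * w) @ u) * ((v * w) @ v))

# Least-squares direction of each known concept in embedding space.
dirs = [np.linalg.lstsq(c_known[:, [j]], x, rcond=None)[0].ravel()
        for j in range(d_known)]

for i in range(d_known):
    for j in range(i + 1, d_known):
        plain = dirs[i] @ dirs[j] / (np.linalg.norm(dirs[i]) * np.linalg.norm(dirs[j]))
        aligned = weighted_cosine(dirs[i], dirs[j], task_weights)
        print(f"concepts {i}-{j}: euclidean cos {plain:+.2f}, task-aligned cos {aligned:+.2f}")
```

In this toy setup, a positive accuracy gap together with concept pairs whose task-weighted similarity diverges from their Euclidean similarity loosely mirrors the frustration signature discussed above; the paper's own measures and decomposition should be consulted for the precise definitions.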