🤖 AI Summary
This work is motivated by the weak interpretability of large language models (LLMs), the unreliability of post-hoc explanation methods, and the limitations of concept bottleneck models (CBMs), including their dependence on costly human annotations, restricted representational capacity, and lack of task-level interpretability. We propose the self-supervised Interpretable Concept Embedding Model (ICEM), the first framework to bring concept modeling to the textual domain. ICEM leverages the generalization ability of LLMs to predict concept labels autonomously, without manual annotation. It enables end-to-end interpretable prediction via concept embeddings and an interpretable decision function, supporting concept interventions, logical attribution, and controllable steering of the decoding path. On text classification tasks, ICEM matches the performance of fully supervised CBMs and black-box LLMs while providing human-understandable, causally grounded explanations. ICEM thus unifies interpretability, interactivity, and controllability in a single architecture.
📝 Abstract
Despite their success, Large Language Models (LLMs) still face criticism, as their lack of interpretability limits their controllability and reliability. Traditional post-hoc interpretation methods, based on attention and gradient analyses, offer limited insight into a model's decision-making processes. In the vision domain, concept-based models have emerged as explainable-by-design architectures, employing human-interpretable features as intermediate representations. However, these methods have not yet been adapted to textual data, mainly because they require expensive concept annotations, which are impractical for real-world text data. This paper addresses this challenge by proposing self-supervised Interpretable Concept Embedding Models (ICEMs). We leverage the generalization abilities of LLMs to predict concept labels in a self-supervised way, while delivering the final predictions with an interpretable function. Our experiments show that ICEMs can be trained in a self-supervised way, achieving performance similar to fully supervised concept-based models and end-to-end black-box ones. Additionally, we show that our models are (i) interpretable, offering meaningful logical explanations for their predictions; (ii) interactive, allowing humans to modify intermediate predictions through concept interventions; and (iii) controllable, guiding the LLMs' decoding process to follow a required decision-making path.