🤖 AI Summary
This work proposes an uncertainty-aware concept bottleneck model that addresses a critical limitation in existing approaches: the neglect of uncertainty in concept annotations generated by large language models (LLMs). Such annotations are prone to hallucination, yet current methods do not account for this during training. To remedy this, the proposed framework introduces a distribution-free mechanism to quantify the uncertainty of LLM-generated concept labels and integrates this uncertainty into the model training process. By combining LLMs, concept bottleneck architectures, and conformal-style confidence techniques, the method improves model robustness and reliability across multiple real-world datasets.
📝 Abstract
Concept Bottleneck Models (CBMs) provide inherent interpretability by first mapping input samples to high-level semantic concepts and then combining these concepts for the final classification. However, annotating human-understandable concepts requires extensive expert knowledge and labor, constraining the broad adoption of CBMs. A few recent works instead leverage the knowledge of large language models (LLMs) to construct concept bottlenecks. Nevertheless, they face two essential limitations. First, they overlook the uncertainty associated with LLM-annotated concepts and lack a valid mechanism to quantify it, increasing the risk of errors due to LLM hallucinations. Second, they fail to incorporate this annotation uncertainty into the learning process of the concept bottleneck model. To address these limitations, we propose a novel uncertainty-aware CBM method that not only rigorously quantifies the uncertainty of LLM-annotated concept labels with valid, distribution-free guarantees, but also incorporates the quantified concept uncertainty into the CBM training procedure to account for the varying reliability of LLM-annotated concepts. We also provide a theoretical analysis of the proposed method. Extensive experiments on real-world datasets validate the desired properties of our proposed method.
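The abstract does not spell out the quantification mechanism, but "valid, distribution-free guarantees" is the hallmark of conformal prediction. As a purely illustrative sketch (not the paper's actual algorithm), the following shows split conformal calibration of binary concept-label confidences: a small set of trusted labels calibrates a nonconformity threshold, and LLM-annotated labels whose nonconformity exceeds it are flagged as unreliable. All function and variable names here are hypothetical.

```python
import numpy as np

def conformal_concept_reliability(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal check for binary concept labels (illustrative sketch).

    cal_scores : (n,) confidences that a concept is present, on a held-out
                 calibration set with trusted 0/1 labels `cal_labels`.
    test_scores: (m,) confidences for new, LLM-annotated samples.
    Returns a boolean mask: True where the thresholded label's nonconformity
    stays within the calibrated (1 - alpha) quantile, i.e. "reliable".
    """
    # Nonconformity score: 1 minus the probability assigned to the true label.
    probs_true = np.where(cal_labels == 1, cal_scores, 1.0 - cal_scores)
    nonconf = 1.0 - probs_true
    n = len(nonconf)
    # Finite-sample-corrected quantile yields the distribution-free guarantee.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(nonconf, level, method="higher")
    # Threshold test scores into labels, then flag those whose nonconformity
    # does not exceed the calibrated threshold as reliable.
    pred = (test_scores >= 0.5).astype(int)
    probs_pred = np.where(pred == 1, test_scores, 1.0 - test_scores)
    return (1.0 - probs_pred) <= q
```

A training loop could then downweight (or resample) concepts flagged as unreliable, which matches the abstract's idea of accounting for varying annotation reliability; how the paper actually integrates uncertainty into the loss is not stated here.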