CHiQPM: Calibrated Hierarchical Interpretable Image Classification

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Safety-critical domains demand trustworthy AI systems that provide both global interpretability and fine-grained local explanations. Method: We propose CHiQPM, a hierarchical contrastive explanation framework with built-in calibrated, interpretable conformal prediction. It emulates human multi-level reasoning through hierarchical contrastive learning, prototype-based modeling at multiple semantic levels, and conformal calibration, yielding traversable, semantically coherent hierarchical explanations alongside statistically valid prediction sets. Contribution/Results: CHiQPM retains 99% of the baseline model's accuracy on image classification while matching the set-prediction efficiency of standard conformal prediction methods. Crucially, it jointly provides global transparency, revealing the model's high-level decision rationale, and local traceability, attributing individual predictions to specific features and prototypes. This unified capability supports trustworthy human-AI collaborative decision-making in safety-sensitive applications.
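The prototype-based modeling at multiple semantic levels can be illustrated with a minimal sketch. The level names, prototype shapes, and cosine-similarity scoring below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def hierarchical_scores(feature, prototypes):
    """Score a feature vector against class prototypes at each hierarchy level.

    `prototypes` maps a level name (e.g. "coarse", "fine") to an array of
    shape (num_prototypes, dim). This is a hypothetical stand-in for
    prototype-based multi-level reasoning, not CHiQPM's implementation.
    """
    scores = {}
    for level, protos in prototypes.items():
        # Cosine similarity between the feature and each prototype at this level.
        sims = protos @ feature / (
            np.linalg.norm(protos, axis=1) * np.linalg.norm(feature) + 1e-12
        )
        scores[level] = sims
    return scores
```

Traversing the hierarchy then amounts to reading off the most similar prototype per level, from coarse to fine, which is what makes the explanation path human-readable.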

📝 Abstract
Globally interpretable models are a promising approach for trustworthy AI in safety-critical domains. Alongside global explanations, detailed local explanations are a crucial complement to effectively support human experts during inference. This work proposes the Calibrated Hierarchical QPM (CHiQPM), which offers uniquely comprehensive global and local interpretability, paving the way for human-AI complementarity. CHiQPM achieves superior global interpretability by contrastively explaining the majority of classes and offers novel hierarchical explanations that are closer to how humans reason and can be traversed to yield a built-in interpretable conformal prediction (CP) method. Our comprehensive evaluation shows that CHiQPM achieves state-of-the-art accuracy as a point predictor, maintaining 99% of the accuracy of non-interpretable models. This demonstrates a substantial improvement, where interpretability is incorporated without sacrificing overall accuracy. Furthermore, its calibrated set prediction is competitive in efficiency with other CP methods, while providing interpretable predictions of coherent sets along its hierarchical explanation.
Problem

Research questions and friction points this paper is trying to address.

Developing globally interpretable AI models for trustworthy safety-critical applications
Providing hierarchical explanations aligned with human reasoning processes
Maintaining high accuracy while incorporating comprehensive interpretability features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical explanations mimicking human reasoning process
Built-in interpretable Conformal prediction method integration
Maintains 99% of the accuracy of non-interpretable models
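The built-in CP integration is specific to CHiQPM's hierarchy, but the underlying calibration step follows standard split conformal prediction, which can be sketched as follows. The softmax-based nonconformity score and the function names here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Nonconformity score on held-out calibration data:
    # 1 minus the predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, q):
    # A class enters the set if its nonconformity score is within the
    # calibrated threshold; the sets cover the true class with
    # probability at least 1 - alpha.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

In CHiQPM the analogous sets are read off by traversing the explanation hierarchy, so the resulting prediction sets stay semantically coherent rather than being arbitrary class collections.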