🤖 AI Summary
This work addresses a gap in current transparency research: the neglect of foundational AI concepts, in particular the absence of uncertainty quantification from ante-hoc interpretability and counterfactual explanations. It proposes a unified view that integrates uncertainty quantification with *ante-hoc* (inherently transparent) modelling. Conceptually, it argues that uncertainty and ante-hoc interpretability are complementary views of the same underlying idea, and that both epistemic and aleatoric uncertainty can be modelled within the counterfactual generation process, bridging core AI principles with XAI. Methodologically, this perspective connects Bayesian inference, probabilistic modelling and counterfactual generation to yield robust, reliable and human-understandable counterfactuals for inherently transparent models. More broadly, grounding transparency research in AI fundamentals promises more trustworthy models and better human-AI decision-making, advancing human-centred, uncertainty-aware explainable AI.
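As background for the epistemic/aleatoric distinction invoked above (this is the standard Bayesian decomposition from the uncertainty-quantification literature, not a formula taken from the paper itself), the total predictive uncertainty of a model with posterior $p(\theta \mid \mathcal{D})$ splits into an aleatoric and an epistemic term:

$$
\underbrace{\mathcal{H}\big[\mathbb{E}_{p(\theta \mid \mathcal{D})}[p(y \mid x, \theta)]\big]}_{\text{total (predictive entropy)}}
=
\underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\big[\mathcal{H}[p(y \mid x, \theta)]\big]}_{\text{aleatoric (expected entropy)}}
+
\underbrace{\mathcal{I}(y; \theta \mid x, \mathcal{D})}_{\text{epistemic (mutual information)}}
$$

Aleatoric uncertainty captures irreducible noise in the data, while epistemic uncertainty captures disagreement among plausible models and shrinks with more data, which is why it is the natural quantity to control when generating counterfactuals.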
📝 Abstract
This position paper argues that, to its detriment, transparency research overlooks many foundational concepts of artificial intelligence. Here, we focus on uncertainty quantification -- in the context of ante-hoc interpretability and counterfactual explainability -- showing how its adoption could address key challenges in the field. First, we posit that uncertainty and ante-hoc interpretability offer complementary views of the same underlying idea; second, we assert that uncertainty provides a principled unifying framework for counterfactual explainability. Consequently, inherently transparent models can benefit from human-centred explanatory insights -- like counterfactuals -- which are otherwise missing. At a higher level, integrating artificial intelligence fundamentals into transparency research promises to yield more reliable, robust and understandable predictive models.
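To make the second claim concrete, the sketch below shows what an uncertainty-aware counterfactual search could look like: a hill-climbing search over a toy ensemble that seeks a class flip while penalising distance from the original instance and epistemic uncertainty (ensemble disagreement). This is our illustration under stated assumptions, not the authors' method; all names, weights and hyper-parameters are hypothetical.

```python
# Illustrative sketch only -- not the paper's implementation. All names,
# weights and hyper-parameters below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy "ensemble": M logistic-regression members with perturbed weights,
# standing in for bootstrap replicas or Bayesian posterior samples.
M, D = 10, 2
W = rng.normal(loc=[[2.0, -1.0]], scale=0.3, size=(M, D))
b = rng.normal(loc=0.0, scale=0.1, size=M)


def member_probs(x):
    """Per-member probability of the positive class, shape (M,)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))


def binary_entropy(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))


def epistemic(x):
    """Mutual-information term: total entropy minus expected entropy."""
    p = member_probs(x)
    return binary_entropy(p.mean()) - binary_entropy(p).mean()


def counterfactual(x0, lam_dist=0.05, lam_epi=1.0,
                   steps=2000, step_size=0.05):
    """Hill-climb toward the positive class while (i) staying close to
    x0 and (ii) avoiding regions where the ensemble disagrees, so the
    counterfactual lies where the model is epistemically confident."""
    def cost(x):
        flip = max(0.0, 0.6 - member_probs(x).mean())  # want p(y=1) >= 0.6
        return flip + lam_dist * np.linalg.norm(x - x0) + lam_epi * epistemic(x)

    x, best = x0.copy(), cost(x0)
    for _ in range(steps):
        cand = x + rng.normal(scale=step_size, size=x.shape)
        c = cost(cand)
        if c < best:
            x, best = cand, c
    return x


x0 = np.array([-1.0, 1.0])           # instance currently classed negative
xcf = counterfactual(x0)
print("counterfactual:", xcf, "p(y=1) =", float(member_probs(xcf).mean()))
```

The epistemic penalty steers the search away from inputs on which the ensemble members disagree, so the returned counterfactual sits in a region the model has actually learned about rather than an out-of-distribution artefact; this is one way to read the paper's claim that uncertainty offers a principled, unifying basis for counterfactual explainability.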