🤖 AI Summary
Current explainable artificial intelligence (XAI) faces challenges such as methodological fragmentation, conflicting evaluation metrics, and ongoing debates over robustness and fairness, largely due to the absence of a unified evaluation framework and a grounding "ground truth." This work reframes XAI as a causal inference problem and, for the first time, explicitly identifies the underlying causal model of a system as the true ground truth for explanations. It rigorously argues that such a causal model is both necessary and sufficient for achieving genuine interpretability. By integrating causal discovery with high-level concept learning, the study establishes the causal model as the theoretical foundation of XAI, offering a coherent framework to unify research paradigms and resolve longstanding disagreements in the field, while charting clear directions for future inquiry.
📝 Abstract
The demand for Explainable AI (XAI) has triggered an explosion of methods, producing a landscape so fragmented that we now rely on surveys of surveys. Yet, fundamental challenges persist: conflicting metrics, failed sanity checks, and unresolved debates over robustness and fairness. The only consensus on how to achieve explainability is a lack of one. This has led many to point to the absence of a ground truth for defining "the" correct explanation as the main culprit.
This position paper posits that the persistent discord in XAI arises not from an absent ground truth but from a ground truth that exists, albeit as an elusive and challenging target: the causal model that governs the relevant system. By reframing XAI queries about data, models, or decisions as causal inquiries, we prove the necessity and sufficiency of causal models for XAI. We contend that without this causal grounding, XAI remains unmoored. Ultimately, we encourage the community to converge around advanced concept and causal discovery to escape this entrenched uncertainty.
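To make the causal reframing concrete, here is a minimal, hypothetical sketch (ours, not from the paper) of how an associational explanation and a causal query can disagree. It assumes a toy structural causal model in which a confounder Z drives both a feature X and an outcome Y, so that X is highly correlated with Y yet causally inert; all variable names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (illustrative only):
#   Z -> X and Z -> Y; X has no causal effect on Y.
Z = rng.normal(size=n)
X = Z + 0.1 * rng.normal(size=n)   # X is driven by the confounder Z
Y = Z + 0.1 * rng.normal(size=n)   # Y is driven by Z, not by X

# An associational "explanation" would flag X as highly relevant to Y...
print("corr(X, Y):", np.corrcoef(X, Y)[0, 1])           # ~0.99

# ...but the interventional query do(X := 2) tells the causal story:
# setting X by fiat severs its tie to Z, and Y's mechanism never sees X,
# so Y's distribution under the intervention is unchanged.
Y_do = Z + 0.1 * rng.normal(size=n)                     # Y under do(X := 2)
print("E[Y | do(X=2)] - E[Y]:", Y_do.mean() - Y.mean()) # ~0
```

Under this toy model, the correlation-based answer and the causal answer diverge, and only the structural causal model adjudicates which explanation is correct; this is the sense in which the causal model serves as a ground truth for XAI.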