🤖 AI Summary
Current explainable artificial intelligence (XAI) suffers from two fundamental problems: explanation objectives that are ambiguous and diverge from human intuition, and a persistent epistemological and methodological disconnect from applied statistics. This position paper draws a systematic analogy between XAI and applied statistics, advancing the core thesis that an explanation is a statistical functional of a high-dimensional predictive function, and thereby reframing explanation as a problem of statistical inference. Through conceptual framework construction, statistical-philosophical analysis, and cross-disciplinary methodological mapping, the work clarifies the hierarchical objectives, usage paradigms, and evaluation logic of explanations. The resulting framework offers a statistically coherent theoretical foundation and practical guidelines for XAI algorithm design, empirical evaluation, and teaching, encouraging a shift from algorithm-centric to interpretation-centric research and practice.
📝 Abstract
In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used. In this position paper, we argue for a novel and pragmatic perspective: explainable machine learning needs to recognize its parallels with applied statistics. Concretely, explanations are statistics of high-dimensional functions, and we should think about them analogously to traditional statistical quantities. Among other things, this implies that we must think carefully about the matter of interpretation, that is, how explanations relate to the intuitive questions humans have about the world. That this is scarcely discussed in research papers is one of the main drawbacks of the current literature. Fortunately, the analogy between explainable machine learning and applied statistics suggests fruitful ways to improve research practices.
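To make the thesis concrete, here is a minimal sketch (not taken from the paper) of one familiar explanation, partial dependence, computed as a plain statistic of a model's prediction function: for each grid value, it is simply the average prediction over the data with one feature held fixed. The dataset, model, and the `partial_dependence` helper below are illustrative assumptions, not the paper's own method.

```python
# Minimal sketch: partial dependence as a statistic of a high-dimensional
# prediction function f. The data and model are stand-ins for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def partial_dependence(f, X, feature, grid):
    """Plug-in estimate of PD_j(v) = E_X[ f(X with feature j set to v) ],
    i.e. a statistical functional of the prediction function f."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v               # hold one coordinate fixed
        pd_values.append(f(X_mod).mean())   # average over the data
    return np.array(pd_values)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pd_curve = partial_dependence(model.predict, X, feature=0, grid=grid)
print(pd_curve)
```

Because the resulting curve is just an average, the usual statistical considerations (sampling variability, dependence on the data distribution, careful interpretation of the estimand) apply to it directly, which is the kind of parallel to traditional statistical quantities the abstract argues for.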