Onto-Epistemological Analysis of AI Explanations

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
The “black-box” nature of deep learning models hinders their trustworthy deployment, and mainstream eXplainable AI (XAI) methods often rest on unexamined ontological assumptions (what constitutes the “existence” of an explanation) and epistemological assumptions (how we can “know” an explanation). Method: This paper conducts the first systematic onto-epistemological critique of XAI techniques, exposing foundational philosophical commitments — such as local attribution versus global rule-based reasoning, or objective truth versus stakeholder-relative understanding — and analyzing their impact on explanation validity and uptake in high-stakes domains such as healthcare and criminal justice. Contribution/Results: The paper demonstrates that seemingly minor technical design choices encode significant philosophical stances, and that overlooking these choices leads to ineffective or harmful explanations. Accordingly, it proposes an application-oriented XAI selection framework grounded in ontological and epistemological alignment, advancing XAI from ad hoc engineering toward interdisciplinary, rationally grounded design.

📝 Abstract
Artificial intelligence (AI) is being applied in almost every field. At the same time, the currently dominant deep learning methods are fundamentally black-box systems that lack explanations for their inferences, significantly limiting their trustworthiness and adoption. Explainable AI (XAI) methods aim to overcome this challenge by providing explanations of the models' decision process. Such methods are often proposed and developed by engineers and scientists with a predominantly technical background and incorporate their assumptions about the existence, validity, and explanatory utility of different conceivable explanatory mechanisms. However, the basic concept of an explanation -- what it is, whether we can know it, whether it is absolute or relative -- is far from trivial and has been the subject of deep philosophical debate for millennia. As we point out here, the assumptions incorporated into different XAI methods are not harmless and have important consequences for the validity and interpretation of AI explanations in different domains. We investigate ontological and epistemological assumptions in explainability methods when they are applied to AI systems, meaning the assumptions we make about the existence of explanations and our ability to gain knowledge about those explanations. Our analysis shows how seemingly small technical changes to an XAI method may correspond to important differences in the underlying assumptions about explanations. We furthermore highlight the risks of ignoring the underlying onto-epistemological paradigm when choosing an XAI method for a given application, and we discuss how to select and adapt appropriate XAI methods for different domains of application.
Problem

Research questions and friction points this paper is trying to address.

Analyzing philosophical assumptions in explainable AI methods
Examining how technical choices reflect ontological/epistemological positions
Addressing risks of mismatched XAI paradigms in applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes ontological assumptions in explainability methods
Examines epistemological foundations of AI explanation techniques
Evaluates XAI method selection based on philosophical paradigms
Martina Mattioli
Ca’ Foscari University, Venice, Italy
Eike Petersen
Technical University of Denmark, Lyngby, Denmark
Aasa Feragen
Professor, DTU Compute
Machine learning, medical imaging, geometric modelling
Marcello Pelillo
Professor of Computer Science, FIEEE, FIAPR, FAAIA, Ca' Foscari University of Venice & ZJNU
Computer Vision, Machine Learning, Pattern Recognition
Siavash A. Bigdeli
Technical University of Denmark, Lyngby, Denmark