Emergent, not Immanent: A Baradian Reading of Explainable AI

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study challenges the prevailing assumption in explainable artificial intelligence (XAI) that explanations are intrinsic properties of models, arguing that such approaches rest on unexamined ontological and epistemological presuppositions. Drawing on Barad's agential realism, the paper reconceptualizes explanation as an emergent, generative practice arising from the material-discursive entanglement of humans, models, contexts, and explanatory apparatuses in specific situated encounters, rather than as a fixed feature embedded within models themselves. Through a philosophical critique of existing XAI methods and a speculative design prototype for a text-to-music XAI interface, the work reconfigures the ontological foundations of XAI, exposes its implicit assumptions and ethical implications, and charts a design pathway toward next-generation interfaces that support emergent, contextually grounded forms of explanation.

📝 Abstract
Explainable AI (XAI) is frequently positioned as a technical problem of revealing the inner workings of an AI model. This position rests on unexamined onto-epistemological assumptions: meaning is treated as immanent to the model, the explainer is positioned outside the system, and a causal structure is presumed recoverable through computational techniques. In this paper, we draw on Barad's agential realism to develop an alternative onto-epistemology of XAI. We propose that interpretations are material-discursive performances that emerge from situated entanglements of the AI model with humans, context, and the interpretative apparatus. To develop this position, we read a comprehensive set of XAI methods through agential realism and reveal the assumptions and limitations that underpin several of them. We then articulate the framework's ethical dimension and propose design directions for XAI interfaces that support emergent interpretation, using a speculative text-to-music interface as a case study.
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
onto-epistemology
agential realism
interpretation
material-discursive
Innovation

Methods, ideas, or system contributions that make the work stand out.

agential realism
emergent interpretation
material-discursive practices
situated entanglement
explainable AI