Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions

📅 2026-02-27
🤖 AI Summary
Current explainable artificial intelligence (XAI) is hindered by empirical and conceptual shortcomings—including paradoxes, conceptual ambiguities, and erroneous assumptions—that impede its ability to effectively enhance the reliability and trustworthiness of AI systems. This work systematically uncovers the fundamental limitations of XAI in deep neural networks and large language models and proposes a novel “post-XAI” paradigm. This paradigm integrates four dimensions: interactive AI verification protocols, an epistemological framework for AI, context-aware user modeling, and model-centric interpretability analysis. By shifting the focus of AI development from post hoc explanation toward prospective certification and the establishment of scientific foundations, this comprehensive approach offers both theoretical grounding and a research roadmap for building reliable, certifiable artificial intelligence systems.

📝 Abstract
This study provides a cross-disciplinary examination of Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs), and identifies empirical and conceptual limitations in current XAI. We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). These fundamental problems within the current XAI research field reveal three insights: experimentally, XAI exhibits significant flaws; conceptually, it is paradoxical; and pragmatically, further attempts to reform the paradoxical XAI might exacerbate its confusion, demanding fundamental shifts and new research directions. To move beyond XAI's limitations, we propose a four-pronged synthesized paradigm shift toward reliable and certified AI development. These four components are: verification-focused Interactive AI (IAI), to establish scientific community protocols for certifying AI system performance rather than attempting post-hoc explanations; AI Epistemology, for rigorous scientific foundations; User-Sensible AI, to create context-aware systems tailored to specific user communities; and Model-Centered Interpretability, for faithful technical analysis. Together, these offer comprehensive post-XAI research directions.
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
paradoxes
conceptual confusions
false assumptions
AI interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-XAI
Interactive AI
AI Epistemology
User-Sensible AI
Model-Centered Interpretability
Authors

Saleh Afroogh
University of Texas at Austin

Syed Ishtiaque Ahmed
University of Toronto

Petra Ahrweiler
Professor of Sociology of Technology and Innovation / Social Simulation, Johannes Gutenberg University
Innovation networks, Policy Modelling

David Alvarez-Melis
Harvard University & Microsoft Research
Machine Learning, Optimal Transport, Natural Language Processing, Interpretability

Mansur Maturidi Arief
Stanford University
Rare-event simulation, importance sampling, autonomous vehicle safety analysis, sustainability

Emilia Barakova
Eindhoven University of Technology
Human-robot interaction, interaction design, Embodied AI

Falco J. Bargagli-Stoffi
University of California, Los Angeles

Erdem Biyik
University of Southern California

Hanjie Chen
Rice University
Natural Language Processing, Interpretable Machine Learning

Xiang 'Anthony' Chen
Associate Professor, UCLA
Human-Computer Interaction

Robert Clements
University of San Francisco

Keeley Crockett
Manchester Metropolitan University

Amit Dhurandhar
Principal Research Scientist, IBM
Artificial intelligence, machine learning, data mining

Fethiye Irmak Dogan
Postdoctoral Research Associate, University of Cambridge
Human-Robot Interaction, Robot Learning, Explainability, Conversational AI, Deep Learning

Mollie Dollinger
Curtin University

Motahhare Eslami
Carnegie Mellon University
Human-Computer Interaction, Social Computing, Data Mining

Aldo A. Faisal
Imperial College London

Arya Farahi
University of Texas at Austin
Machine Learning, Statistical Inference, Astroinformatics, Trustworthy AI, Explainable AI

Melanie Fernandez Pradie
Microsoft Research

Saadia Gabriel
University of California, Los Angeles

Diego Garcia-Olano
Meta

Marzyeh Ghassemi
Massachusetts Institute of Technology

Shaona Ghosh
Nvidia

Hatice Gunes
Full Professor of Affective Intelligence & Robotics, University of Cambridge
Artificial Intelligence, Affective AI, Health AI, AI Fairness, Socially Assistive Robotics

Ehsan Hajiramezanali
Principal Research Scientist, Genentech
Machine Learning, Deep Learning, Bayesian Statistics, Graph Neural Networks, Drug Discovery