🤖 AI Summary
Current explainable artificial intelligence (XAI) is hindered by empirical and conceptual shortcomings—including paradoxes, conceptual ambiguities, and erroneous assumptions—that impede its ability to effectively enhance the reliability and trustworthiness of AI systems. This work systematically uncovers the fundamental limitations of XAI in deep neural networks and large language models and proposes a novel “post-XAI” paradigm. This paradigm integrates four dimensions: interactive AI verification protocols, an epistemological framework for AI, context-aware user modeling, and model-centric interpretability analysis. By shifting the focus of AI development from post hoc explanation toward prospective certification and the establishment of scientific foundations, this comprehensive approach offers both theoretical grounding and a research roadmap for building reliable, certifiable artificial intelligence systems.
📝 Abstract
This study provides a cross-disciplinary examination of Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs), and identifies empirical and conceptual limitations in current XAI. We discuss critical symptoms that stem from deeper root causes: two paradoxes, two conceptual confusions, and five false assumptions. These fundamental problems in the current XAI research field yield three insights: experimentally, XAI exhibits significant flaws; conceptually, it is paradoxical; and pragmatically, further attempts to reform the paradoxical XAI may exacerbate its confusion, demanding fundamental shifts and new research directions. To move beyond XAI's limitations, we propose a four-pronged, synthesized paradigm shift toward reliable and certified AI development. Its four components are: verification-focused Interactive AI (IAI), which establishes scientific-community protocols for certifying AI system performance rather than attempting post hoc explanations; AI Epistemology, which provides rigorous scientific foundations; User-Sensible AI, which creates context-aware systems tailored to specific user communities; and Model-Centered Interpretability, which enables faithful technical analysis. Together, these components offer comprehensive post-XAI research directions.