PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models

📅 2025-05-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing XAI frameworks couple explanation methods tightly to specific model architectures and data modalities, support only a limited set of attribution methods, and lack the evaluation and optimization phases needed to recommend good explanations, all of which hinders real-world deployment. To address these issues, the authors propose PnPXAI, a plug-and-play universal XAI framework. PnPXAI automatically detects model architectures, recommends the explanation methods applicable to them, and optimizes their hyperparameters, enabling use across diverse data modalities (e.g., images, text, time series) and heterogeneous neural networks without hard-coded dependencies. User surveys and case studies in medical and financial domains demonstrate improvements in explanation quality and practical utility.

📝 Abstract
Recently, post hoc explanation methods have emerged to enhance model transparency by attributing model outputs to input features. However, these methods face challenges due to their specificity to certain neural network architectures and data modalities. Existing explainable artificial intelligence (XAI) frameworks have attempted to address these challenges but suffer from several limitations. These include limited flexibility to diverse model architectures and data modalities due to hard-coded implementations, a restricted number of supported XAI methods because of the requirements for layer-specific operations of attribution methods, and sub-optimal recommendations of explanations due to the lack of evaluation and optimization phases. Consequently, these limitations impede the adoption of XAI technology in real-world applications, making it difficult for practitioners to select the optimal explanation method for their domain. To address these limitations, we introduce PnPXAI, a universal XAI framework that supports diverse data modalities and neural network models in a Plug-and-Play (PnP) manner. PnPXAI automatically detects model architectures, recommends applicable explanation methods, and optimizes hyperparameters for optimal explanations. We validate the framework's effectiveness through user surveys and showcase its versatility across various domains, including medicine and finance.
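The abstract describes a detect-then-recommend pipeline: inspect a model's architecture, then filter the explanation methods whose layer-specific requirements it satisfies. A minimal conceptual sketch of that idea, in plain Python with entirely hypothetical method names and requirements (this is not the actual PnPXAI API, which the page does not show):

```python
# Hypothetical sketch of architecture-aware method recommendation.
# Models are represented as lists of layer-type names; a real framework
# would inspect the network's modules instead.

# Illustrative requirements only; the actual applicability rules differ.
METHOD_REQUIREMENTS = {
    "GradCAM":             {"requires": {"Conv2d"}},     # needs conv feature maps
    "LRP":                 {"excludes": {"Attention"}},  # assume conv/linear rules only
    "IntegratedGradients": {},                           # gradient-based, broadly applicable
    "AttentionRollout":    {"requires": {"Attention"}},  # transformer-specific
}

def detect_layers(model):
    """Stand-in for automatic architecture detection."""
    return set(model)

def recommend(model):
    """Return explanation methods whose layer requirements the model satisfies."""
    layers = detect_layers(model)
    applicable = []
    for name, req in METHOD_REQUIREMENTS.items():
        if req.get("requires", set()) - layers:
            continue  # a required layer type is missing
        if req.get("excludes", set()) & layers:
            continue  # an unsupported layer type is present
        applicable.append(name)
    return sorted(applicable)

cnn = ["Conv2d", "ReLU", "Linear"]
vit = ["PatchEmbed", "Attention", "Linear"]
print(recommend(cnn))  # ['GradCAM', 'IntegratedGradients', 'LRP']
print(recommend(vit))  # ['AttentionRollout', 'IntegratedGradients']
```

The point of the sketch is the decoupling the paper argues for: applicability is data, not hard-coded branches, so new methods or modalities only add entries rather than new code paths.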
Problem

Research questions and friction points this paper is trying to address.

Limited flexibility to diverse model architectures and data modalities
Restricted number of supported XAI methods due to layer-specific requirements
Sub-optimal explanation recommendations lacking evaluation and optimization phases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal XAI framework for diverse modalities and models
Automatically detects architectures and recommends methods
Optimizes hyperparameters for best explanations
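The third contribution, hyperparameter optimization for explanations, amounts to scoring candidate settings with an evaluation metric and keeping the best. A toy sketch under stated assumptions (the metric and parameter names are invented for illustration; the paper's actual evaluators and search strategy are not detailed on this page):

```python
# Hypothetical evaluate-and-optimize loop for an explanation hyperparameter
# (e.g., the number of integration steps in a gradient-based method).

def explanation_quality(n_steps):
    """Toy stand-in for an explanation-quality metric: fidelity improves
    with more steps (diminishing returns) but runtime cost is penalized."""
    fidelity = 1.0 - 1.0 / n_steps
    cost_penalty = 0.002 * n_steps
    return fidelity - cost_penalty

def optimize_hyperparameter(candidates, metric):
    """Grid search: return the candidate value maximizing the metric."""
    return max(candidates, key=metric)

best = optimize_hyperparameter([5, 10, 20, 50, 100], explanation_quality)
print(best)  # 20
```

A real system would replace the toy metric with established evaluation criteria (e.g., faithfulness or robustness measures) and could substitute a smarter search for the grid, but the loop's shape is the same.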
Seongun Kim
PhD Student, KAIST
Machine Learning, Reinforcement Learning, Robotics, Explainable AI
Sol A Kim
Kim Jaechul Graduate School of AI, KAIST
Geonhyeong Kim
Kim Jaechul Graduate School of AI, KAIST
Enver Menadjiev
Ph.D. Candidate, Kim Jaechul Graduate School of AI, KAIST
Time Series Forecasting, Deep Learning
Chanwoo Lee
Kim Jaechul Graduate School of AI, KAIST
Seongwook Chung
Kim Jaechul Graduate School of AI, KAIST
Nari Kim
Kim Jaechul Graduate School of AI, KAIST
Jaesik Choi
Director of Explainable Artificial Intelligence Center at KAIST
Explainable AI, Interpretability, Prediction, Time Series Analysis, Relational Learning