Empirical Characterization of Rationale Stability Under Controlled Perturbations for Explainable Pattern Recognition

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a critical gap in explainable AI evaluation by proposing a metric that quantifies the consistency of model explanations across inputs sharing the same label or subjected to label-preserving perturbations. Specifically, the method measures the cosine similarity of SHAP values among samples with identical labels, and applies the same measure to minor, label-preserving variations of an input, thereby capturing explanation stability. By integrating class-wise consistency and robustness to small input perturbations into a unified evaluation framework, the work offers a systematic approach to assessing cross-sample explanation coherence. Experiments on SST-2 and IMDB using BERT, RoBERTa, and DistilBERT show that the proposed metric identifies inconsistent explanatory behaviors, such as undue reliance on specific features, and provides diagnostic capability beyond conventional fidelity-based metrics, ultimately contributing to the development of more trustworthy AI systems.
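A minimal sketch of how such a class-wise consistency score could be computed is shown below. The function names, the mean-pairwise aggregation, and the assumption that attribution vectors are already aligned to a common feature space are ours, not necessarily the paper's exact formulation:

```python
import numpy as np
from itertools import combinations

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two attribution vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def class_wise_consistency(shap_values, labels):
    """Mean pairwise cosine similarity of SHAP vectors within each class.

    shap_values: (n_samples, n_features) attributions, assumed already
                 aligned to a common feature space (e.g., a shared
                 vocabulary or padded token positions).
    labels:      (n_samples,) class labels.
    Returns {label: mean within-class similarity} (NaN for singleton classes).
    """
    shap_values = np.asarray(shap_values)
    labels = np.asarray(labels)
    scores = {}
    for label in np.unique(labels):
        vecs = shap_values[labels == label]
        sims = [cosine_similarity(a, b) for a, b in combinations(vecs, 2)]
        scores[label] = float(np.mean(sims)) if sims else float("nan")
    return scores
```

A low within-class score would flag the kind of inconsistent reasoning the paper targets, e.g., two positive reviews whose predictions rest on unrelated features.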
📝 Abstract
Reliable pattern recognition systems should exhibit consistent behavior across similar inputs, and their explanations should remain stable. However, most Explainable AI evaluations remain instance-centric and do not explicitly quantify whether attribution patterns are consistent across samples that share the same class or represent small variations of the same input. In this work, we propose a novel metric for assessing the consistency of model explanations, ensuring that explanations consistently reflect the intended objectives and remain stable under label-preserving perturbations. We implement this metric using a pre-trained BERT model on the SST-2 sentiment analysis dataset, with additional robustness tests on RoBERTa, DistilBERT, and IMDB, applying SHAP to compute feature importance for various test samples. The proposed metric quantifies the cosine similarity of SHAP values for inputs with the same label, aiming to detect inconsistent behaviors, such as biased reliance on certain features or failure to maintain consistent reasoning for similar predictions. Through a series of experiments, we evaluate the ability of this metric to identify misaligned predictions and inconsistencies in model explanations. These experiments are compared against standard fidelity metrics to assess whether the new metric can effectively identify when a model's behavior deviates from its intended objectives. The proposed framework provides a deeper understanding of model behavior by enabling robust verification of rationale stability, which is critical for building trustworthy AI systems. By quantifying whether models rely on consistent attribution patterns for similar inputs, the approach supports more rigorous evaluation of model behavior in practical pattern recognition pipelines. Our code is publicly available at https://github.com/anmspro/ESS-XAI-Stability.
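As a rough illustration of the setup described in the abstract, the sketch below computes SHAP attributions for a sentence and a label-preserving paraphrase, then compares them with cosine similarity. The checkpoint, the perturbation, and the positive-class index are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
import shap
from transformers import pipeline

# Illustrative stand-in for the paper's fine-tuned BERT: an off-the-shelf
# SST-2 checkpoint.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,  # return scores for both classes
)
explainer = shap.Explainer(classifier)

# A label-preserving perturbation chosen so both sentences tokenize to the
# same length and the attribution vectors align position by position.
original = "This movie was absolutely wonderful."
perturbed = "This film was absolutely wonderful."

sv = explainer([original, perturbed])

# Attributions toward the positive class (index 1 is an assumption; check
# sv.output_names for your checkpoint). Cosine compares direction only.
a = np.asarray(sv.values[0])[:, 1]
b = np.asarray(sv.values[1])[:, 1]
stability = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
print(f"rationale stability (cosine): {stability:.3f}")
```

A score near 1.0 would indicate that the paraphrase leaves the rationale essentially unchanged; a markedly lower score would signal the unstable attribution behavior the metric is designed to detect.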
Problem

Research questions and friction points this paper is trying to address.

rationale stability
explainable AI
attribution consistency
controlled perturbations
pattern recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

rationale stability
controlled perturbations
explanation consistency
SHAP
explainable AI
Abu Noman Md Sakib
Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX, 78249, USA
Zhensen Wang
Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX, 78249, USA
Merjulah Roby
Department of Mechanical, Aerospace, and Industrial Engineering, The University of Texas at San Antonio, San Antonio, TX, 78249, USA
Zijie Zhang
Assistant Professor, University of Texas at San Antonio
Trustworthy Machine Learning · Adversarial A/D · Federated Learning · Graph