Assessing reliability of explanations in unbalanced datasets: a use-case on the occurrence of frost events

📅 2025-07-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing XAI methods offer limited reliability for minority-class predictions in unbalanced datasets, a critical issue in high-risk domains. Method: the authors propose a preliminary evaluation of explanation robustness focused on the minority class. It generates on-manifold neighbours of a test instance, aggregates the explanations produced for them, and quantifies explanation stability with a consistency metric. Contribution/Results: by grounding neighbour generation in the data manifold rather than assuming uniform perturbations, the evaluation helps flag fragile explanations for rare events. A use-case on a tabular dataset with numerical features, concerning the occurrence of frost events, illustrates the approach.

📝 Abstract
The use of eXplainable Artificial Intelligence (XAI) methods has become essential in practical applications, given the increasing deployment of Artificial Intelligence (AI) models and the legislative requirements put forward in recent years. A fundamental but often underestimated aspect of explanations is their robustness, a key property that should be satisfied in order to trust them. In this study, we provide some preliminary insights on evaluating the reliability of explanations in the specific case of unbalanced datasets, which are very frequent in high-risk use-cases but at the same time considerably challenging for both AI models and XAI methods. We propose a simple evaluation focused on the minority class (i.e., the less frequent one) that leverages on-manifold generation of neighbours, explanation aggregation, and a metric to test explanation consistency. We present a use-case based on a tabular dataset with numerical features focusing on the occurrence of frost events.
Problem

Research questions and friction points this paper is trying to address.

Evaluating the reliability of XAI explanations on unbalanced datasets
Assessing the robustness of explanations for minority-class instances
Proposing a metric for explanation consistency, applied to frost event prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates on-manifold neighbours of minority-class instances
Aggregates explanations across the generated neighbours
Introduces a metric to test explanation consistency
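The pipeline described above (on-manifold neighbours, explanation aggregation, a consistency metric) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the consistency metric here is mean pairwise cosine similarity, the explainer is a stand-in callable, and the Gaussian perturbation only mimics on-manifold neighbour generation; all names (`consistency_score`, `evaluate_minority_explanation`, `explain_fn`) are hypothetical.

```python
import numpy as np

def consistency_score(explanations):
    """Mean pairwise cosine similarity between explanation vectors.
    1.0 means perfectly consistent; lower values flag fragile explanations.
    (Illustrative choice of metric, not necessarily the paper's.)"""
    E = np.asarray(explanations, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = E @ E.T
    iu = np.triu_indices(len(E), k=1)  # upper triangle = distinct pairs
    return float(sims[iu].mean())

def evaluate_minority_explanation(instance, neighbours, explain_fn):
    """Explain the instance and its neighbours, then aggregate
    (mean attribution) and score consistency."""
    expls = [explain_fn(x) for x in [instance, *neighbours]]
    aggregated = np.mean(expls, axis=0)
    return aggregated, consistency_score(expls)

# Toy example: a linear "explainer" whose attributions are x * w.
rng = np.random.default_rng(0)
w = np.array([2.0, -1.0, 0.5])
explain_fn = lambda x: x * w

x0 = np.array([1.0, 1.0, 1.0])  # a minority-class instance
# Small Gaussian perturbations as a stand-in for on-manifold sampling.
neighbours = x0 + 0.01 * rng.standard_normal((20, 3))
agg, score = evaluate_minority_explanation(x0, neighbours, explain_fn)
print(round(score, 3))  # close to 1.0 for a stable explainer
```

In a real setting, `explain_fn` would wrap a post-hoc attribution method (e.g. SHAP or LIME applied to the trained model) and the neighbours would be drawn from the learned data manifold rather than from isotropic noise.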
Ilaria Vascotto
Department of Mathematics, Informatics and Geosciences, University of Trieste, Trieste, Italy

Valentina Blasone
Department of Mathematics, Informatics and Geosciences, University of Trieste, Trieste, Italy

Alex Rodriguez
University of Trieste
Machine learning, condensed matter, statistics, biophysics, computational chemistry, data mining

Alessandro Bonaita
Assicurazioni Generali Spa, Milan, Italy

Luca Bortolussi
Università di Trieste
Modelling and simulation, explainable artificial intelligence, machine learning, formal methods, cyber-physical systems