Exploring SAIG Methods for an Objective Evaluation of XAI

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of unified objective evaluation criteria in explainable artificial intelligence (XAI), primarily due to the absence of widely accepted “ground truth” labels for explanations. To tackle this challenge, the paper presents a systematic review and analysis of Synthetic Artificial Intelligence Ground Truth (SAIG) methods, which construct artificial ground truths to enable direct assessment of XAI techniques. It introduces, for the first time, a seven-dimensional classification framework that clarifies the distinctions and underlying logic among existing SAIG approaches. Through comprehensive literature review, comparative analysis, and categorization modeling, the work elucidates the core mechanisms of these methods. The findings reveal a significant lack of consensus in XAI evaluation and underscore the urgent need for standardization, offering both a theoretical foundation and a structured analytical perspective for future benchmark development.

📝 Abstract
The evaluation of eXplainable Artificial Intelligence (XAI) methods is a rapidly growing field, characterized by a wide variety of approaches. This diversity highlights the complexity of XAI evaluation, which, unlike traditional AI assessment, lacks a universally correct ground truth for the explanation, making objective evaluation challenging. One promising direction to address this issue involves the use of what we term Synthetic Artificial Intelligence Ground truth (SAIG) methods, which generate artificial ground truths to enable the direct evaluation of XAI techniques. This paper presents the first review and analysis of SAIG methods. We introduce a novel taxonomy to classify these approaches, identifying seven key features that distinguish different SAIG methods. Our comparative study reveals a concerning lack of consensus on the most effective XAI evaluation techniques, underscoring the need for further research and standardization in this area.
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
XAI evaluation
objective evaluation
ground truth
Synthetic Artificial Intelligence Ground truth
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Artificial Intelligence Ground truth
XAI evaluation
taxonomy
explainable AI
objective evaluation
Miquel Miró-Nicolau
UGiVIA Research Group, University of the Balearic Islands, Dpt. of Mathematics and Computer Science, Palma, 07122, Balearic Islands, Spain.; Laboratory for Artificial Intelligence Applications (LAIA@UIB), University of the Balearic Islands, Dpt. of Mathematics and Computer Science, Palma, 07122, Balearic Islands, Spain.
Gabriel Moyà-Alcover
UGiVIA Research Group, University of the Balearic Islands, Dpt. of Mathematics and Computer Science, Palma, 07122, Balearic Islands, Spain.; Laboratory for Artificial Intelligence Applications (LAIA@UIB), University of the Balearic Islands, Dpt. of Mathematics and Computer Science, Palma, 07122, Balearic Islands, Spain.
Anna Arias-Duart
Barcelona Supercomputing Center (BSC)
Artificial Intelligence