Exploring Commonalities in Explanation Frameworks: A Multi-Domain Survey Analysis

📅 2024-05-20
🏛️ xAI
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the lack of a domain-agnostic, human-centered explainable artificial intelligence (XAI) framework by investigating user preferences across the healthcare, retail, and energy domains. Through expert interviews and structured multi-stakeholder surveys, it empirically identifies "interpretability over accuracy" as a cross-domain preference and establishes feature importance and counterfactual explanations as the two foundational pillars of a universal XAI framework. Method: the approach integrates qualitative transcription analysis, questionnaire-driven requirement modeling, and genetic programming (GP) to construct inherently interpretable models. Contribution/Results: the paper proposes the first empirically validated, unified XAI framework spanning multiple domains; releases an open-source, standardized XAI questionnaire toolkit; demonstrates its feasibility across three core machine learning tasks (prediction, diagnosis, and prescription); and advances the XAI paradigm from technology-centric design toward human consensus-driven development.

📝 Abstract
This study presents insights gathered from surveys and discussions with specialists in three domains, aiming to find essential elements for a universal explanation framework that could be applied to these and other similar use cases. The insights are incorporated into a software tool that utilizes GP algorithms, known for their interpretability. The applications analyzed include a medical scenario (involving predictive ML), a retail use case (involving prescriptive ML), and an energy use case (also involving predictive ML). We interviewed professionals from each sector, transcribing their conversations for further analysis. Additionally, experts and non-experts in these fields filled out questionnaires designed to probe various dimensions of explanatory methods. The findings indicate a universal preference for sacrificing a degree of accuracy in favor of greater explainability. Additionally, we highlight the significance of feature importance and counterfactual explanations as critical components of such a framework. Our questionnaires are publicly available to facilitate the dissemination of knowledge in the field of XAI.
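Feature importance, one of the two pillars the abstract identifies, can be illustrated with a minimal permutation-importance sketch. Everything here is hypothetical: the toy model, its weights, and the feature names (`age`, `cholesterol`) stand in for any fitted predictor, and this is a generic technique, not the paper's GP-based tooling.

```python
import random

# Hypothetical toy predictor with illustrative weights; a stand-in
# for any fitted model.
def model(age, cholesterol):
    return 0.8 * age + 0.2 * cholesterol

def mean_abs_error(rows):
    """rows: (age, cholesterol, observed_risk) triples."""
    return sum(abs(model(a, c) - y) for a, c, y in rows) / len(rows)

def permutation_importance(rows, feature_idx, seed=0):
    """Error increase after shuffling one feature column; larger = more important."""
    rng = random.Random(seed)
    baseline = mean_abs_error(rows)
    column = [row[feature_idx] for row in rows]
    rng.shuffle(column)
    shuffled = [tuple(new if i == feature_idx else v for i, v in enumerate(row))
                for row, new in zip(rows, column)]
    return mean_abs_error(shuffled) - baseline

# Synthetic data whose targets follow the model exactly, so baseline error is 0.
gen = random.Random(1)
data = [(gen.uniform(30, 80), gen.uniform(150, 300)) for _ in range(200)]
data = [(a, c, model(a, c)) for a, c in data]

imp_age = permutation_importance(data, 0)   # age column shuffled
imp_chol = permutation_importance(data, 1)  # cholesterol column shuffled
```

Because the toy model weights age more heavily over a comparable effective range, shuffling the age column degrades predictions more, which is exactly the ranking a stakeholder-facing explanation would report.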
Problem

Research questions and friction points this paper is trying to address.

Identifying universal explanation framework elements across domains
Balancing accuracy and explainability in ML applications
Prioritizing feature importance and counterfactuals in XAI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes GP algorithms for interpretability
Incorporates feature importance and counterfactual explanations
Establishes a cross-domain preference for explainability over accuracy
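The counterfactual pillar can likewise be illustrated with a minimal brute-force search over feature changes. The decision rule, feature names, and thresholds below are hypothetical, and the grid search is a generic sketch rather than the paper's method:

```python
# Hypothetical decision rule standing in for a trained classifier.
def approve_loan(income, debt):
    return income - 0.5 * debt >= 40.0

def counterfactual(income, debt, step=1.0, max_delta=50.0):
    """Smallest L1-cost change (raise income and/or lower debt) that flips
    a rejection into an approval; returns (cost, (income, debt)) or None."""
    best = None
    steps = int(max_delta / step)
    for d_inc in range(steps + 1):
        for d_debt in range(steps + 1):
            candidate = (income + d_inc * step, max(0.0, debt - d_debt * step))
            if approve_loan(*candidate):
                cost = (d_inc + d_debt) * step
                if best is None or cost < best[0]:
                    best = (cost, candidate)
    return best
```

For a rejected applicant at `income=30, debt=10`, the search returns a cost of 15 with income raised to 45, which reads directly as the kind of actionable statement surveyed users favored: "you would have been approved had your income been 45."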
Eduard Barbu
Institute of Computer Science, Tartu, Estonia
Marharytha Domnich
Institute of Computer Science, Tartu, Estonia
Raul Vicente
Institute of Computer Science, Tartu, Estonia
Nikos Sakkas
Apintech Ltd, POLIS-21 Group, Cyprus
André Morim
LTPlabs, Avenida da Senhora da Hora, 459, Porto, Portugal