Generative Example-Based Explanations: Bridging the Gap between Generative Modeling and Explainability

📅 2024-10-28
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing exemplar-based image explanation methods that leverage deep generative models deliver visually compelling outputs, but they are conceptually and evaluatively disconnected from the broader eXplainable AI (XAI) community. Method: The paper proposes a unified probabilistic framework that formally encodes classical explanation desiderata—faithfulness, locality, and succinctness—as generative modeling objectives, realized through Bayesian inference and feature-semantic alignment, yielding semantically interpretable and theoretically verifiable local exemplar explanations in high-dimensional spaces. The approach is architecture-agnostic and integrates with mainstream generative models, including VAEs and GANs. Contribution/Results: The framework systematically bridges the conceptual gap between generative explanations and conventional XAI paradigms. Empirically, it improves explanation fidelity, stability, and human interpretability across image and multimodal tasks, outperforming state-of-the-art methods on multiple quantitative metrics and in user studies.

📝 Abstract
Recently, several methods have leveraged deep generative modeling to produce example-based explanations of decision algorithms for high-dimensional input data. Despite promising results, a disconnect exists between these methods and the classical explainability literature, which focuses on lower-dimensional data with semantically meaningful features. This conceptual and communication gap leads to misunderstandings and misalignments in goals and expectations. In this paper, we bridge this gap by proposing a novel probabilistic framework for local example-based explanations. Our framework integrates the critical characteristics of classical local explanation desiderata while being amenable to high-dimensional data and their modeling through deep generative models. Our aim is to facilitate communication, foster rigor and transparency, and improve the quality of peer discussion and research progress.
Problem

Research questions and friction points this paper is trying to address.

Bridging generative modeling and explainability research gap
Formally defining probabilistic example-based explanation framework
Aligning generative methods with explainability desiderata and characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic framework for example-based explanations
Formally defining explanations via generative models
Aligning generative methods with explainability desiderata
Philipp Vaeth
Technical University of Applied Sciences Würzburg-Schweinfurt
Deep Learning, Explainable AI (XAI), Generative Modeling, Diffusion Models
Alexander M. Fruehwald
Center for Artificial Intelligence (CAIRO), Technical University of Applied Sciences Würzburg-Schweinfurt, Franz-Horn-Straße 2, Würzburg, Germany
Benjamin Paassen
Bielefeld University
Educational Data Mining, Structured Data, Machine Learning, Neural Networks, Metric Learning
Magda Gregorova
Center for Artificial Intelligence (CAIRO), Technical University of Applied Sciences Würzburg-Schweinfurt, Franz-Horn-Straße 2, Würzburg, Germany