From Explainable to Explained AI: Ideas for Falsifying and Quantifying Explanations

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI models in medical imaging frequently rely on spurious features, compromising generalizability, fairness, and biological interpretability. Method: The authors propose a human-in-the-loop vision-language modeling system for verifiable explanations in computational pathology classification, moving from explainable to explained AI. The approach introduces an explanation verification framework grounded in falsifiability testing and predictive quantification, overcoming inherent limitations of saliency maps. It combines sliding-window patch extraction with multi-instance learning for whole-slide image analysis, integrates general-purpose vision-language models for semantic quantification of explanations, and implements an AI-augmented digital slide viewer. Results: Experiments demonstrate qualitative validation of explanatory claims and quantitative discrimination among competing explanatory hypotheses, yielding a generalizable, closed-loop explanation framework for digital pathology. Code and prompt templates are publicly released.
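The sliding-window plus multi-instance learning (MIL) pipeline mentioned above can be sketched roughly as follows. This is a minimal illustration, not the paper's code: the patch size, stride, and attention-pooling formulation are assumptions, and `extract_patches`/`attention_mil` are hypothetical names.

```python
import numpy as np

def extract_patches(slide: np.ndarray, size: int, stride: int) -> np.ndarray:
    """Slide a window over a 2D image and collect flattened patches."""
    h, w = slide.shape
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(slide[y:y + size, x:x + size].ravel())
    return np.stack(patches)

def attention_mil(features: np.ndarray, w: np.ndarray):
    """Attention-based MIL pooling: score each patch, softmax the scores,
    and aggregate patches into a single slide-level embedding."""
    scores = features @ w                 # one score per patch
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                    # softmax attention weights
    return attn @ features, attn          # slide embedding, per-patch weights

rng = np.random.default_rng(0)
slide = rng.random((64, 64))                              # toy stand-in for a WSI
patches = extract_patches(slide, size=16, stride=16)      # 4x4 grid -> 16 patches
embedding, attn = attention_mil(patches, rng.random(patches.shape[1]))
```

The attention weights double as a built-in measure of which patches drive the slide-level prediction, which is what makes MIL a natural fit for explanation experiments on whole-slide images.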

📝 Abstract
Explaining deep learning models is essential for clinical integration of medical image analysis systems. A good explanation highlights whether a model depends on spurious features that undermine generalization and harm a subset of patients or, conversely, may reveal novel biological insights. Although techniques like GradCAM can identify influential features, they are measurement tools that do not themselves form an explanation. We propose a human-machine-VLM interaction system tailored to explaining classifiers in computational pathology, including multi-instance learning for whole-slide images. Our proof of concept comprises (1) an AI-integrated slide viewer to run sliding-window experiments that test the claims of an explanation, and (2) quantification of an explanation's predictiveness using general-purpose vision-language models. The results demonstrate that this allows us to qualitatively test claims of explanations and to quantifiably distinguish competing explanations. This offers a practical path from explainable AI to explained AI in digital pathology and beyond. Code and prompts are available at https://github.com/nki-ai/x2x.
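One way to read "quantifying an explanation's predictiveness" is: if a VLM scores each patch for how well it matches a textual explanation (e.g. "regions with dense nuclei"), a good explanation's scores should rank the model's positive patches above its negatives, which AUC measures directly. The sketch below is a toy illustration under that assumption; the scores are hypothetical VLM outputs, not data from the paper.

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise rank comparison:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy per-patch labels plus hypothetical VLM match scores
# for two competing textual explanations of the same classifier.
labels = [1, 1, 1, 0, 0, 0]
explanation_a = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # aligns with the labels
explanation_b = [0.5, 0.2, 0.6, 0.7, 0.4, 0.3]   # weak alignment

print(auc(explanation_a, labels))  # 1.0
print(auc(explanation_b, labels))  # about 0.44
```

Comparing the two AUCs gives a single number for which explanatory hypothesis better predicts the classifier's behavior, which is the kind of quantitative discrimination between competing explanations the abstract describes.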
Problem

Research questions and friction points this paper is trying to address.

Explaining deep learning models for clinical integration of medical image analysis
Identifying spurious features affecting model generalization in pathology
Quantifying and testing AI explanations using vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-machine-VLM interaction for pathology explanations
AI-integrated slide viewer for testing explanations
Quantification of explanation predictiveness using vision-language models