FAME: Formal Abstract Minimal Explanation for Neural Networks

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of scalable, formal, minimal abductive explanations for large neural networks by proposing an approach that eliminates dependence on feature-traversal order. Building on the abstract interpretation framework, the method introduces a specialized perturbation domain and integrates LiRPA-based bounds to iteratively refine explanations, efficiently pruning irrelevant features until it converges to a formally minimal explanation. The authors claim this is the first technique capable of generating formal minimal explanations for large-scale networks. They also introduce a worst-case distance metric to evaluate explanation quality. Experiments show that, compared to VERIX+, the proposed method significantly reduces explanation size and improves computational efficiency on medium- to large-scale networks.

📝 Abstract
We propose FAME (Formal Abstract Minimal Explanations), a new class of abductive explanations grounded in abstract interpretation. FAME is the first method to scale to large neural networks while reducing explanation size. Our main contribution is the design of dedicated perturbation domains that eliminate the need for traversal order. FAME progressively shrinks these domains and leverages LiRPA-based bounds to discard irrelevant features, ultimately converging to a formal abstract minimal explanation. To assess explanation quality, we introduce a procedure that measures the worst-case distance between an abstract minimal explanation and a true minimal explanation. This procedure combines adversarial attacks with an optional VERIX+ refinement step. We benchmark FAME against VERIX+ and demonstrate consistent gains in both explanation size and runtime on medium- to large-scale neural networks.
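The iterative pruning idea in the abstract (fix all features, then provably free the irrelevant ones until only a minimal explanation remains) can be sketched in a toy form. This is an illustrative sketch, not the authors' implementation: the linear classifier, the interval-bound check `robust_under_free`, and all names are hypothetical stand-ins for FAME's LiRPA-based bounds over abstract perturbation domains, and the greedy loop below is traversal-order dependent, unlike FAME.

```python
def robust_under_free(fixed, x, w, eps):
    """Toy soundness check: the score w.x of a linear classifier stays
    positive even when every non-fixed feature is perturbed within
    [x_i - eps, x_i + eps] (a simple interval bound)."""
    lower = 0.0
    for i, (xi, wi) in enumerate(zip(x, w)):
        if i in fixed:
            lower += wi * xi
        else:
            lower += wi * xi - abs(wi) * eps  # worst case over the interval
    return lower > 0

def prune_explanation(x, w, eps):
    """Start from the full feature set and greedily free features whose
    perturbation provably cannot flip the prediction; what remains is a
    (greedy, order-dependent) abductive explanation."""
    fixed = set(range(len(x)))
    changed = True
    while changed:  # iterate until no further feature can be freed
        changed = False
        for i in sorted(fixed):
            candidate = fixed - {i}
            if robust_under_free(candidate, x, w, eps):
                fixed = candidate
                changed = True
    return fixed

x = [1.0, 0.2, -0.1, 3.0]
w = [2.0, 0.1, 0.1, 0.5]
explanation = prune_explanation(x, w, eps=2.0)
print(sorted(explanation))  # [0] — only feature 0 is needed to keep the prediction stable
```

Here only the dominant feature survives the pruning: freeing any other feature leaves the worst-case score positive, so the explanation shrinks to a single index. FAME replaces the interval bound with tighter LiRPA bounds on a real network and shrinks abstract perturbation domains instead of toggling features one at a time.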
Problem

Research questions and friction points this paper is trying to address.

neural networks
minimal explanation
abstract interpretation
formal explanation
explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

FAME
abstract interpretation
minimal explanation
LiRPA
neural network explainability
Ryma Boumazouza
Airbus SAS, France; IRT Saint-Exupery, France
Raya Elsaleh
The Hebrew University of Jerusalem, Israel
Melanie Ducoffe
Airbus SAS, France; IRT Saint-Exupery, France
Shahaf Bassan
The Hebrew University of Jerusalem
Explainable AI · Interpretability · ML Theory
Guy Katz
The Hebrew University of Jerusalem
Verification · Software Engineering