Logic Explanation of AI Classifiers by Categorical Explaining Functors

📅 2025-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing post-hoc XAI methods struggle to ensure logical consistency between explanations and the internal reasoning of black-box classifiers, often yielding unfaithful or logically contradictory explanations. Method: this paper introduces the categorical notion of a functor into explainable AI for the first time, proposing the *explaining functor* framework: it treats model input/output spaces as objects and logical entailment relations as morphisms, and constructs structure-preserving explanation mappings. Contribution/Results: the framework theoretically guarantees that extracted logical rules preserve logical entailment with respect to the model's decision process, overcoming the fidelity limitations of heuristic approaches. Evaluated on a synthetic benchmark, the method significantly reduces both the contradiction rate and the unfaithfulness rate, empirically supporting its theoretical rigor and robustness.
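The core idea (theories as objects, entailment as morphisms, explanations as a structure-preserving map) can be illustrated with a toy semantic check. The sketch below is not the paper's construction; it is a minimal illustration in which a "theory" is represented by its set of satisfying assignments, entailment is subset inclusion, and functoriality amounts to requiring that the classifier's decision region entail the extracted rule. The names `classifier`, `rule`, and `entails` are hypothetical.

```python
from itertools import product

def models(formula, n_vars):
    """All truth assignments (tuples of bools) satisfying a boolean formula."""
    return {v for v in product([False, True], repeat=n_vars) if formula(*v)}

def entails(t1, t2):
    # Semantically, T1 |= T2 iff every model of T1 is a model of T2.
    return t1 <= t2

# Model-side object: the opaque classifier's positive decision region.
classifier = lambda x, y: x and y      # decides positive iff x AND y

# Explanation-side object: a rule a (hypothetical) explaining functor might
# extract; it must be entailed by the decision process to be faithful.
rule = lambda x, y: x                  # coarser rule: positive implies x

M = models(classifier, 2)              # {(True, True)}
E = models(rule, 2)                    # {(True, False), (True, True)}

# Entailment preservation: x AND y |= x, so this explanation is faithful.
assert entails(M, E)

# A contradictory rule fails the same check and would be rejected.
bad_rule = lambda x, y: not x
assert not entails(M, models(bad_rule, 2))
```

The subset check is the semantic counterpart of the morphism the framework requires: an explanation that fails it is exactly a contradictory or unfaithful one.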

📝 Abstract
The most common methods in explainable artificial intelligence are post-hoc techniques which identify the most relevant features used by pretrained opaque models. Some of the most advanced post-hoc methods can generate explanations that account for the mutual interactions of input features in the form of logic rules. However, these methods frequently fail to guarantee the consistency of the extracted explanations with the model's underlying reasoning. To bridge this gap, we propose a theoretically grounded approach to ensure coherence and fidelity of the extracted explanations, moving beyond the limitations of current heuristic-based approaches. To this end, drawing from category theory, we introduce an explaining functor which structurally preserves logical entailment between the explanation and the opaque model's reasoning. As a proof of concept, we validate the proposed theoretical constructions on a synthetic benchmark, verifying how the proposed approach significantly mitigates the generation of contradictory or unfaithful explanations.
Problem

Research questions and friction points this paper is trying to address.

Post-hoc explanations are frequently inconsistent with the model's underlying reasoning
Heuristic rule-extraction methods cannot guarantee logical entailment between explanation and model
Extracted explanations can be contradictory or unfaithful to the classifier's decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explaining functor grounded in category theory
Ensures logical entailment preservation
Mitigates contradictory or unfaithful explanations