Ranking Counterfactual Explanations

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the counterfactual explanation problem in AI decision-making—specifically, “Why not another outcome?”—by proposing the first rigorously formalized counterfactual explanation framework. Departing from conventional minimality constraints, it establishes a multidimensional ranking criterion grounded in logical reasoning and optimization theory. The method is empirically validated on 12 real-world datasets and evaluated using three novel metrics for explanation quality. Key contributions are: (1) a formal definition of counterfactual explanations, characterization of their theoretical properties, and clarification of their relationship to factual explanations; (2) an optimal ranking mechanism that jointly optimizes representativeness, generalizability, and robustness; and (3) deterministic identification of the unique optimal explanation in most settings—outperforming stochastic minimal counterfactuals in both representativeness and coverage, thereby demonstrating strong efficacy and practicality.

📝 Abstract
AI-driven outcomes can be challenging for end-users to understand. Explanations can address two key questions: "Why this outcome?" (factual) and "Why not another?" (counterfactual). While substantial efforts have been made to formalize factual explanations, a precise and comprehensive study of counterfactual explanations is still lacking. This paper proposes a formal definition of counterfactual explanations, proves some properties they satisfy, and examines their relationship with factual explanations. Given that multiple counterfactual explanations generally exist for a specific case, we also introduce a rigorous method to rank these counterfactual explanations, going beyond a simple minimality condition, and to identify the optimal ones. Our experiments with 12 real-world datasets highlight that, in most cases, a single optimal counterfactual explanation emerges. We also demonstrate, via three metrics, that the selected optimal explanation exhibits higher representativeness and can explain a broader range of elements than a random minimal counterfactual. This result highlights the effectiveness of our approach in identifying more robust and comprehensive counterfactual explanations.
Problem

Research questions and friction points this paper is trying to address.

Formalizing counterfactual explanations for AI outcomes
Ranking multiple counterfactual explanations effectively
Identifying optimal counterfactual explanations for robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal definition of counterfactual explanations
Rigorous ranking method for counterfactuals
Optimal counterfactual identification via metrics
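The paper's formal definitions and ranking criteria are not reproduced on this page, but the core idea can be sketched informally: enumerate the minimal sets of feature changes that flip a model's outcome, then rank them by how many other instances the same change would also explain (a stand-in for the paper's representativeness/coverage metrics). The classifier, feature names, and coverage measure below are hypothetical illustrations, not the authors' actual formalism.

```python
from itertools import combinations

# Toy boolean classifier standing in for the model being explained
# (illustrative only; not from the paper).
def classify(x):
    # approve iff (income_ok and employed) or collateral
    return (x[0] and x[1]) or x[2]

FEATURES = ["income_ok", "employed", "collateral"]

def counterfactuals(x):
    """Enumerate minimal sets of feature flips that change the outcome."""
    base = classify(x)
    found = []
    for size in range(1, len(x) + 1):
        for idxs in combinations(range(len(x)), size):
            # skip candidates containing an already-found smaller flip set
            if any(set(f) <= set(idxs) for f in found):
                continue
            y = list(x)
            for i in idxs:
                y[i] = not y[i]
            if classify(tuple(y)) != base:
                found.append(idxs)
    return found

def coverage(flip, dataset):
    """Count instances in `dataset` whose outcome the same flip also changes
    (a crude proxy for representativeness)."""
    count = 0
    for x in dataset:
        y = list(x)
        for i in flip:
            y[i] = not y[i]
        if classify(tuple(y)) != classify(x):
            count += 1
    return count

if __name__ == "__main__":
    x = (True, False, False)  # a rejected applicant
    data = [(bool(a), bool(b), bool(c))
            for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    ranked = sorted(counterfactuals(x), key=lambda f: -coverage(f, data))
    for f in ranked:
        print([FEATURES[i] for i in f], "coverage:", coverage(f, data))
```

On this toy example both single-feature flips are minimal counterfactuals, but flipping `collateral` explains more instances than flipping `employed`, so the coverage ranking picks it out uniquely, mirroring the paper's finding that a single optimal explanation typically emerges.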
Suryani Lim
Federation University, Australia
Henri Prade
CNRS, France and University of New South Wales
Artificial Intelligence · Decision Making
Gilles Richard
IRIT-CNRS Toulouse, France