FGCE: Feasible Group Counterfactual Explanations for Auditing Fairness

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing counterfactual explanations for group fairness auditing suffer from limited real-world feasibility, poor subgroup interpretability, and inadequate modeling of multi-objective trade-offs. Method: the first graph-based, group-level feasible counterfactual framework: (i) a feasibility-aware counterfactual generator that explicitly encodes real-world constraints in a graph; (ii) subgroup construction that identifies protected individuals sharing similar counterfactual pathways; and (iii) multi-objective optimization balancing the number of counterfactuals, intervention cost, and coverage breadth. Contributions/Results: (1) first formal definition and implementation of group-level feasible counterfactuals; (2) fairness-auditing-specific evaluation metrics that quantify diverse bias patterns; (3) significant improvements on benchmark datasets (+23.6% feasibility preservation and +31.2% explanation coverage), demonstrating the framework's effectiveness and practicality for trustworthy ML auditing.

📝 Abstract
This paper introduces the first graph-based framework for generating group counterfactual explanations to audit model fairness, a crucial aspect of trustworthy machine learning. Counterfactual explanations are instrumental in understanding and mitigating unfairness by revealing how inputs should change to achieve a desired outcome. Our framework, named Feasible Group Counterfactual Explanations (FGCEs), captures real-world feasibility constraints and constructs subgroups with similar counterfactuals, setting it apart from existing methods. It also addresses key trade-offs in counterfactual generation, including the balance between the number of counterfactuals, their associated costs, and the breadth of coverage achieved. To evaluate these trade-offs and assess fairness, we propose measures tailored to group counterfactual generation. Our experimental results on benchmark datasets demonstrate the effectiveness of our approach in managing feasibility constraints and trade-offs, as well as the potential of our proposed metrics in identifying and quantifying fairness issues.
Problem

Research questions and friction points this paper is trying to address.

Generating feasible group counterfactual explanations for fairness auditing
Modeling real-world feasibility constraints in the counterfactual generation process
Developing metrics to quantify group and subgroup fairness disparities
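One way to picture the kind of disparity metric this line of work calls for is a recourse-burden gap: the difference in mean minimum counterfactual (intervention) cost across protected groups. The sketch below is a hypothetical illustration, not the paper's metric; the function name and inputs are assumptions.

```python
from collections import defaultdict

def recourse_burden_gap(costs, groups):
    """Per-group mean minimum counterfactual cost and the max-min gap.

    costs:  per-instance cost of the cheapest feasible counterfactual
    groups: the protected-group label of each instance
    A larger gap suggests one group needs costlier interventions to flip
    its outcome -- a possible unfairness signal.
    """
    by_group = defaultdict(list)
    for cost, group in zip(costs, groups):
        by_group[group].append(cost)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(means.values()) - min(means.values()), means
```

For example, if group A's instances need costs [1, 3] and group B's need [2, 6], the means are 2 and 4 and the gap is 2, flagging a heavier recourse burden on group B.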
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based framework for group counterfactual explanations
Models real-world feasibility constraints and trade-offs
Introduces novel fairness metrics for subgroup analysis
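A graph-based feasible counterfactual search can be pictured as follows (a minimal sketch under assumed structure, not the authors' implementation): nodes are discretized feature states, directed edges encode feasible actions with costs (e.g., education can only increase), and the cheapest path from an instance's state to any favorably classified state yields a feasible counterfactual. All names below are hypothetical.

```python
import heapq

def min_cost_counterfactual(graph, source, is_favorable):
    """Dijkstra over a feasibility graph.

    graph:        dict mapping a state to [(neighbor_state, action_cost), ...];
                  edges exist only for actions that are feasible in practice
    source:       the instance's current feature state
    is_favorable: predicate, True if the classifier labels a state favorably
    Returns (total_cost, path) for the cheapest feasible counterfactual,
    or None if no favorable state is reachable.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if is_favorable(node):
            # Reconstruct the action sequence back to the source.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return None
```

Restricting the edge set to feasible actions is what distinguishes this from plain nearest-counterfactual search: an unreachable favorable state simply has no path, so infeasible recourse is never suggested.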
Christos Fragkathoulas
University of Ioannina, Archimedes / Athena RC, Greece
Vasiliki Papanikou
University of Ioannina, Archimedes / Athena RC, Greece
E. Pitoura
University of Ioannina, Archimedes / Athena RC, Greece
Evimaria Terzi
Professor of Computer Science, Boston University
Data Mining · Algorithms