MC3G: Model Agnostic Causally Constrained Counterfactual Generation

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-stakes decision-making, there exists a fundamental tension among model transparency, algorithmic privacy preservation, and the operational feasibility of counterfactual explanations. Method: We propose a model-agnostic, causally constrained counterfactual generation framework. It first constructs an interpretable rule-based surrogate model to approximate the black-box predictor; then incorporates a causal graph to identify critical feature dependencies; and finally explicitly models and optimizes user effort cost during counterfactual search. Contribution/Results: Unlike existing approaches, our framework generates counterfactuals that are more realistic, fair, and low-effort—while preserving the intellectual property of proprietary models. It achieves significant improvements over baseline methods in interpretability, actionability, and computational efficiency. Extensive experiments on multiple real-world datasets validate its effectiveness.

📝 Abstract
Machine learning models increasingly influence decisions in high-stakes settings such as finance, law, and hiring, driving the need for transparent, interpretable outcomes. However, while explainable approaches can help understand the decisions being made, they may inadvertently reveal the underlying proprietary algorithm: an undesirable outcome for many practitioners. Consequently, it is crucial to balance meaningful transparency with a form of recourse that clarifies why a decision was made and offers actionable steps by which a favourable outcome can be obtained. Counterfactual explanations offer a powerful mechanism to address this need by showing how specific input changes lead to a more favourable prediction. We propose Model-Agnostic Causally Constrained Counterfactual Generation (MC3G), a novel framework that tackles limitations in existing counterfactual methods. First, MC3G is model-agnostic: it approximates any black-box model using an explainable rule-based surrogate model. Second, this surrogate is used to generate counterfactuals that produce a favourable outcome for the original underlying black-box model. Third, MC3G refines cost computation by excluding the "effort" associated with feature changes that occur automatically due to causal dependencies. By focusing only on user-initiated changes, MC3G provides a more realistic and fair representation of the effort needed to achieve a favourable outcome. We show that MC3G delivers more interpretable and actionable counterfactual recommendations than existing techniques, all while having a lower cost. Our findings highlight MC3G's potential to enhance transparency, accountability, and practical utility in decision-making processes that incorporate machine-learning approaches.
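The first step the abstract describes, approximating a black-box predictor with an interpretable surrogate, can be illustrated with a minimal sketch. The paper uses a rule-based surrogate; the snippet below substitutes the simplest possible stand-in (a single threshold rule fit by querying a hypothetical black box on sampled points), so the black-box function, feature names, and thresholds are all illustrative assumptions, not the paper's actual method.

```python
import random

# Hypothetical black-box classifier (stands in for a proprietary model):
# approves (returns 1) when income sufficiently exceeds twice the debt.
def black_box(income, debt):
    return 1 if income - 2 * debt > 50 else 0

# Query the black box on sampled points, then fit a one-rule surrogate
# by choosing the threshold that best reproduces its outputs.
random.seed(0)
samples = [(random.uniform(0, 200), random.uniform(0, 100)) for _ in range(500)]
labels = [black_box(i, d) for i, d in samples]

def rule_accuracy(threshold):
    """Fraction of black-box outputs matched by the rule i - 2d > threshold."""
    preds = [1 if i - 2 * d > threshold else 0 for i, d in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

best_t = max(range(0, 101), key=rule_accuracy)
print(best_t, rule_accuracy(best_t))
```

The learned rule then serves double duty: it is human-readable (supporting recourse) and it can be searched over to find counterfactuals, without ever exposing the black box's internals.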
Problem

Research questions and friction points this paper is trying to address.

Generates counterfactual explanations for black-box model decisions
Balances transparency with algorithm protection in high-stakes domains
Computes effort costs only for user-initiated feature changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic rule-based surrogate approximation
Causally constrained counterfactual generation
User-initiated effort cost computation
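The third bullet, charging effort only for user-initiated changes, can be sketched as follows. The causal graph, feature names, and cost weights here are invented for illustration; the paper's actual cost model and graph are richer.

```python
# Sketch of the causally-aware effort cost: feature changes that follow
# automatically from causal dependencies cost the user nothing; only
# user-initiated changes are charged. Graph and weights are illustrative.
CAUSAL_CHILDREN = {"education": ["skill_level"], "skill_level": ["income"]}
COST = {"education": 5.0, "skill_level": 3.0, "income": 2.0}

def downstream(features):
    """All features causally implied by the given set of changed features."""
    seen, stack = set(), list(features)
    while stack:
        f = stack.pop()
        for child in CAUSAL_CHILDREN.get(f, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def effort_cost(changed):
    """Charge only changes not implied by some other changed feature."""
    implied = downstream(changed)
    return sum(COST[f] for f in changed if f not in implied)

# Changing education causally raises skill_level and income, so the
# user is charged only for the education change.
print(effort_cost({"education", "skill_level", "income"}))  # → 5.0
```

A naive cost model would charge 5.0 + 3.0 + 2.0 = 10.0 for the same counterfactual; discounting the causally implied changes is what makes the recommended recourse both cheaper and more realistic.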
Authors

Sopam Dasgupta
The University of Texas at Dallas

Sadaf MD Halim
The University of Texas at Dallas

Joaquín Arias
CETINIA, Universidad Rey Juan Carlos

Elmer Salazar
The University of Texas at Dallas

Gopal Gupta
Professor of Computer Science, The University of Texas at Dallas
Programming Languages, Logic Programming, Artificial Intelligence