P2C: Path to Counterfactuals

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-stakes decision-making scenarios, existing counterfactual explanation methods often produce infeasible recommendations because they neglect causal dependencies among features and the temporal ordering of actions. To address this, the authors propose P2C, a framework that models causal structure via causal graphs and performs ordered intervention planning, generating executable, causally consistent counterfactual paths. Each intervention is grounded in the current causal state, and only user-initiated actions incur cost, which improves the fidelity of cost estimates. P2C uses goal-directed Answer Set Programming (specifically, s(CASP)) to jointly perform causal reasoning and constrained sequential path optimization. Experiments show that P2C outperforms conventional planners that lack causal modelling: it avoids invalid operations and generates more realistic, implementable action sequences, enhancing the practical utility and deployability of counterfactual explanations.
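The cost model described above can be illustrated with a toy Python sketch (this is not the paper's s(CASP) implementation; the feature names and the linear credit-score rule are invented for this example). Intervening on one feature automatically propagates to its causal children, and only the user-initiated change is charged:

```python
# Toy illustration of causal propagation and user-action cost.
# Hypothetical causal rule: credit_score is determined by savings and debt.

def propagate(state):
    """Recompute downstream features from their causal parents."""
    new = dict(state)
    new["credit_score"] = 300 + 0.5 * new["savings"] - 0.8 * new["debt"]
    return new

def intervene(state, feature, value):
    """Apply one user action, then let causal effects happen for free."""
    acted = dict(state)
    acted[feature] = value
    # Cost 1: only the direct user action is counted; the induced
    # change to credit_score comes from the causal model at no cost.
    return propagate(acted), 1

state = propagate({"savings": 400, "debt": 200, "credit_score": 0})
state, cost = intervene(state, "savings", 800)
```

Here raising savings from 400 to 800 automatically lifts the (invented) credit score, but the user is charged for a single action rather than for every feature that changed.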

📝 Abstract
Machine-learning models are increasingly driving decisions in high-stakes settings such as finance, law, and hiring, highlighting the need for transparency. However, the key challenge is to balance transparency -- clarifying 'why' a decision was made -- with recourse: providing actionable steps on 'how' to turn an unfavourable outcome into a favourable one. Counterfactual explanations reveal 'why' an undesired outcome occurred and 'how' to reverse it through targeted feature changes (interventions). Current counterfactual approaches have limitations: 1) they often ignore causal dependencies between features, and 2) they typically assume all interventions can happen simultaneously, an unrealistic assumption in practical scenarios where actions are typically taken in a sequence. As a result, these counterfactuals are often not achievable in the real world. We present P2C (Path-to-Counterfactuals), a model-agnostic framework that produces a plan (ordered sequence of actions) converting an unfavourable outcome to a causally consistent favourable outcome. P2C addresses both limitations by 1) explicitly modelling causal relationships between features and 2) ensuring that each intermediate state in the plan is feasible and causally valid. P2C uses the goal-directed Answer Set Programming system s(CASP) to generate the plan, accounting for feature changes that happen automatically due to causal dependencies. Furthermore, P2C refines cost (effort) computation by only counting changes actively made by the user, resulting in realistic cost estimates. Finally, P2C highlights how its causal planner outperforms standard planners, which lack causal knowledge and thus can generate illegal actions.
Problem

Research questions and friction points this paper is trying to address.

Generating causally consistent counterfactual explanations for ML decisions
Providing sequential actionable plans instead of simultaneous interventions
Ensuring feasibility and validity of intermediate states in recourse paths
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates ordered action sequences for counterfactuals
Models causal dependencies between features explicitly
Uses Answer Set Programming for feasible intermediate states
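The ordered-planning idea in the bullets above can be sketched as a breadth-first search over action sequences in which every intermediate state must satisfy feasibility constraints (a hedged toy in Python; the paper encodes this in goal-directed ASP with s(CASP), and the actions, constraints, and stand-in classifier here are all invented for illustration):

```python
from collections import deque

# Hypothetical atomic actions: each maps a name to (feature, delta).
ACTIONS = {
    "save_more": ("savings", +200),
    "pay_debt": ("debt", -100),
}

def feasible(state):
    """Feasibility constraint on every intermediate state."""
    return state["debt"] >= 0 and state["savings"] >= 0

def favourable(state):
    """Stand-in for the ML model's favourable-outcome check."""
    return state["debt"] == 0 and state["savings"] >= 400

def apply_action(state, action):
    feat, delta = ACTIONS[action]
    new = dict(state)
    new[feat] += delta
    if action == "pay_debt":
        new["savings"] -= 100  # invented causal link: paying debt spends savings
    return new

def plan(start, max_len=4):
    """BFS for an ordered action sequence whose every state is feasible."""
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if favourable(state):
            return path
        if len(path) == max_len:
            continue
        for a in ACTIONS:
            nxt = apply_action(state, a)
            if feasible(nxt):  # prune plans with infeasible intermediate states
                queue.append((nxt, path + [a]))
    return None

seq = plan({"savings": 300, "debt": 200})
```

Because infeasible intermediate states are pruned during the search, the returned plan is an ordered sequence a user could actually execute step by step, which is the property the causal planner enforces and ordinary planners without causal knowledge can violate.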
Sopam Dasgupta
The University of Texas at Dallas, Richardson TX 75080, USA
Sadaf MD Halim
The University of Texas at Dallas, Richardson TX 75080, USA
Joaquín Arias
CETINIA, Universidad Rey Juan Carlos, Madrid, Spain
Elmer Salazar
The University of Texas at Dallas, Richardson TX 75080, USA
Gopal Gupta
Professor of Computer Science, The University of Texas at Dallas
Programming Languages, Logic Programming, Artificial Intelligence