🤖 AI Summary
In high-stakes decision-making scenarios, existing counterfactual explanation methods often produce infeasible recommendations because they neglect causal dependencies among features and the temporal ordering of actions. To address this, we propose P2C, a framework that models causal structure via causal graphs and performs ordered intervention planning, generating executable, causally consistent counterfactual paths: each intervention is grounded in the current causal state, and only user-initiated actions incur cost, which improves the fidelity of cost estimates. P2C employs goal-directed Answer Set Programming (specifically, s(CASP)) to jointly perform causal reasoning and constrained sequential path optimization. Experiments show that P2C outperforms conventional planners that lack causal modelling: it avoids invalid operations and generates more realistic, implementable action sequences, enhancing the practical utility and deployability of counterfactual explanations.
📝 Abstract
Machine-learning models increasingly drive decisions in high-stakes settings such as finance, law, and hiring, highlighting the need for transparency. The key challenge is to balance transparency -- clarifying 'why' a decision was made -- with recourse: providing actionable steps on 'how' to turn an unfavourable outcome into a favourable one. Counterfactual explanations reveal 'why' an undesired outcome occurred and 'how' to reverse it through targeted feature changes (interventions).
Current counterfactual approaches have two key limitations: 1) they often ignore causal dependencies between features, and 2) they typically assume all interventions happen simultaneously, which is unrealistic in practice, where actions are taken in sequence. As a result, these counterfactuals are often unachievable in the real world.
We present P2C (Path-to-Counterfactuals), a model-agnostic framework that produces a plan (an ordered sequence of actions) converting an unfavourable outcome into a causally consistent favourable one. P2C addresses both limitations by 1) explicitly modelling causal relationships between features and 2) ensuring that each intermediate state in the plan is feasible and causally valid. P2C uses the goal-directed Answer Set Programming system s(CASP) to generate the plan, accounting for feature changes that happen automatically due to causal dependencies. Furthermore, P2C refines cost (effort) computation by counting only changes actively made by the user, yielding realistic cost estimates. Finally, we show that P2C's causal planner outperforms standard planners, which lack causal knowledge and can therefore generate illegal actions.
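The core ideas above (ordered interventions, automatic causal propagation, and cost counted only for user actions) can be illustrated with a minimal sketch. This is not the paper's s(CASP) implementation; it is a toy breadth-first planner over a hypothetical credit domain in which paying off debt causally improves a derived credit score, so that change costs the user nothing extra:

```python
# Illustrative sketch only (the paper uses s(CASP), not Python).
# Toy domain with an assumed causal rule: credit_score is derived
# from debt and income, so it cannot be set directly by the user.
from collections import deque

def propagate(state):
    """Apply the (hypothetical) causal dependency: downstream features
    update automatically after each user intervention."""
    s = dict(state)
    s["credit_score"] = (
        "good" if s["debt"] == "low" and s["income"] == "high" else "poor"
    )
    return s

def favourable(state):
    """Stand-in for the black-box classifier's decision."""
    return state["credit_score"] == "good"

# Only directly actionable features count as user interventions.
ACTIONS = [("debt", "low"), ("income", "high")]

def plan(initial):
    """Breadth-first search for the shortest ordered action sequence
    reaching a favourable, causally consistent state. Each intervention
    is applied to the *current* causal state; cost = len(plan), i.e.
    only user-made changes, never causally induced ones."""
    start = propagate(initial)
    frontier = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, path = frontier.popleft()
        if favourable(state):
            return path
        for feature, value in ACTIONS:
            if state[feature] == value:
                continue  # skip no-op interventions
            nxt = propagate({**state, feature: value})
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                frontier.append((nxt, path + [(feature, value)]))
    return None

print(plan({"debt": "high", "income": "high", "credit_score": "poor"}))
# -> [('debt', 'low')]  (credit_score flips for free via the causal rule)
```

Note how the returned plan never includes a `credit_score` action: the favourable score emerges from the causal rule, so the user's effort is just one intervention.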