🤖 AI Summary
This work addresses the fragmentation of counterfactual explanations across model ensembles. We propose the first unified counterfactual generation method explicitly designed for multi-model joint constraints. Our core contribution is the integration of entropy-based risk measures into counterfactual optimization, establishing a tunable risk-parameterized constraint framework that enables a continuous trade-off between *ensemble-wide validity* and *low feature modification cost*, and that reduces to the worst-case robust solution at extremal parameter values. To ensure transferability across constituent models, we combine gradient-driven search with ensemble feasibility modeling. Extensive evaluation on real-world datasets demonstrates that our approach reduces average feature modification cost by 18% compared to state-of-the-art baselines, while maintaining validity across 70–100% of ensemble models. This significantly improves explanation consistency and practical utility in ensemble settings.
📄 Abstract
Counterfactual explanations indicate the smallest change to an input that would lead to a different outcome from a machine learning model. Counterfactuals have generated immense interest in high-stakes applications such as finance, education, and hiring. In many such use cases, the decision-making process relies on an ensemble of models rather than a single one. Despite significant research on counterfactuals for a single model, the problem of generating one counterfactual explanation for an ensemble of models has received limited attention. Each individual model might lead to a different counterfactual, while finding a counterfactual accepted by all models might significantly increase the cost (effort). We propose a novel strategy to find a counterfactual for an ensemble of models from the perspective of the entropic risk measure. Entropic risk is a convex risk measure that satisfies several desirable properties. We incorporate this risk measure into a novel constrained optimization to generate counterfactuals for ensembles that remain valid for several models. The main significance of our measure is that it provides a knob that allows the generation of counterfactuals that stay valid under an adjustable fraction of the models. We also show that a limiting case of our entropic-risk-based strategy yields a counterfactual valid for all models in the ensemble (the worst-case min-max approach). We study the trade-off between the cost (effort) of the counterfactual and its validity across the ensemble by varying the degree of risk aversion, as determined by our risk parameter knob. We validate our performance on real-world datasets.
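The mechanics described above can be sketched with a toy implementation. The sketch below is not the authors' code: the ensemble (random linear classifiers), the hinge-style per-model loss, the quadratic cost weight `lam`, and the finite-difference gradient step are all illustrative assumptions. It only shows the shape of the idea: aggregate per-model losses with the entropic risk `(1/θ) log E[exp(θ·loss)]`, where small `θ` behaves like an average (risk-tolerant, cheaper counterfactuals) and large `θ` approaches the worst-case max over models (the min-max limit mentioned in the abstract), then minimize that risk plus a distance cost by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: K random linear classifiers (all names/values are illustrative).
K, d = 5, 3
W = rng.normal(size=(K, d))
b = rng.normal(size=K)

def model_losses(x):
    """Hinge-style loss per model; zero once that model confidently accepts x."""
    scores = W @ x + b
    return np.maximum(0.0, 1.0 - scores)

def entropic_risk(losses, theta):
    """rho_theta = (1/theta) * log( mean( exp(theta * losses) ) ).
    theta -> 0 recovers the mean loss; theta -> inf recovers the max loss."""
    m = theta * losses
    # log-mean-exp computed stably by shifting out the max
    return (np.max(m) + np.log(np.mean(np.exp(m - np.max(m))))) / theta

def objective(x, x0, theta, lam=0.5):
    # entropic risk over the ensemble + quadratic feature-modification cost
    return entropic_risk(model_losses(x), theta) + lam * np.sum((x - x0) ** 2)

def counterfactual(x0, theta, steps=500, lr=0.05, eps=1e-4):
    """Gradient descent with finite-difference gradients (stand-in for autodiff)."""
    x = x0.copy()
    for _ in range(steps):
        g = np.array([
            (objective(x + eps * e, x0, theta) - objective(x - eps * e, x0, theta))
            / (2 * eps)
            for e in np.eye(d)
        ])
        x -= lr * g
    return x

x0 = rng.normal(size=d)
x_mild = counterfactual(x0, theta=0.5)    # risk-tolerant: cheaper, may fail some models
x_averse = counterfactual(x0, theta=50.0) # risk-averse: pushed toward worst-case validity
print("validity (mild):  ", np.mean((W @ x_mild + b) > 0))
print("validity (averse):", np.mean((W @ x_averse + b) > 0))
```

Varying `theta` is the "knob" the abstract refers to: it trades modification cost against the fraction of ensemble models for which the counterfactual remains valid.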