🤖 AI Summary
This work addresses the gap in counterfactual explanation (CE) generation for general integer optimization problems, systematically investigating how to produce human-interpretable and feasible decision explanations by perturbing constraints or objective function parameters. We first establish that CE generation is Σ₂^p-complete—thereby characterizing its intrinsic computational hardness—and then design exact algorithms for tractable subclasses, including parameterized knapsack problems. Our methodology integrates computational complexity analysis, integer programming modeling, and algorithmic construction. Empirical evaluation demonstrates that our approach computes optimal CEs within hours on knapsack instances with up to 40 items. The results bridge a critical theoretical and algorithmic gap in explainable integer optimization, offering a novel paradigm for high-assurance, interpretable decision support systems.
📝 Abstract
Counterfactual explanations (CEs) offer a human-understandable way to explain decisions by identifying specific changes to the input parameters of a given model that would lead to a desired change in the outcome. For optimization models, CEs have so far been studied only in limited contexts, and little research exists on CEs for general integer optimization problems. In this work, we address this gap. We first show that the general problem of constructing a CE is $\Sigma_2^p$-complete even for binary integer programs with just a single mutable constraint. Second, we propose solution algorithms for several of the most tractable special cases: (i) mutable objective parameters, (ii) a single mutable constraint, (iii) a mutable right-hand side, and (iv) fully mutable input parameters. We evaluate our approach on classical knapsack problem instances, focusing on cases with mutable constraint parameters. Our results show that our methods can find optimal CEs within a few hours for small instances involving up to 40 items.
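To make the idea concrete, here is a minimal illustrative sketch (not the paper's algorithm) of a CE for the mutable right-hand-side case (iii): given a tiny knapsack instance, search for the smallest integer change to the capacity so that a desired item appears in an optimal solution. The brute-force enumeration is only viable for toy sizes and is an assumption of this sketch.

```python
from itertools import combinations

def knapsack_optima(values, weights, capacity):
    """Brute-force all optimal knapsack solutions (toy sizes only)."""
    n = len(values)
    best, best_sets = 0, [set()]
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                v = sum(values[i] for i in subset)
                if v > best:
                    best, best_sets = v, [set(subset)]
                elif v == best:
                    best_sets.append(set(subset))
    return best, best_sets

def rhs_counterfactual(values, weights, capacity, target_item, max_delta=20):
    """Smallest-magnitude integer change to the capacity (the RHS) such
    that target_item appears in some optimal solution; None if not found."""
    for d in sorted(range(-max_delta, max_delta + 1), key=abs):
        _, optima = knapsack_optima(values, weights, capacity + d)
        if any(target_item in s for s in optima):
            return d
    return None

# Hypothetical instance: with capacity 5 the unique optimum is {item 0},
# but the user wants item 2 in the chosen bundle.
values, weights, capacity = [10, 7, 6], [5, 4, 3], 5
delta = rhs_counterfactual(values, weights, capacity, target_item=2)
print(delta)  # -2: lowering the capacity to 3 makes {item 2} optimal
```

The returned perturbation is itself the explanation: "had the capacity been 2 units smaller, item 2 would have been selected." The paper's exact algorithms replace this enumeration with integer programming formulations that scale to dozens of items.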