🤖 AI Summary
Mixed-integer linear programming (MILP) solutions often lack interpretability, hindering user trust and decision-making transparency.
Method: This paper introduces X-MILP, a domain-agnostic approach that combines constraint reasoning with graphical contrastive explanation for MILP. It encodes a user's query about a solution as additional constraints, identifies the sources of the resulting infeasibility by computing an Irreducible Infeasible Subsystem (IIS), and from that IIS constructs a "graph of reasons" that makes the structure among the constraints answering the query explicit.
Contribution/Results: Because X-MILP requires no domain-specific knowledge, it can build structured contrastive explanations across diverse problem instances. Experiments on instances of well-known optimisation problems evaluate the empirical hardness of computing these explanations; the resulting graphs of reasons give users a traceable account of why a queried alternative is not attainable, supporting more transparent, explainable optimization.
📝 Abstract
Following the recent push for trustworthy AI, there has been an increasing interest in developing contrastive explanation techniques for optimisation, especially concerning the solution of specific decision-making processes formalised as MILPs. Along these lines, we propose X-MILP, a domain-agnostic approach for building contrastive explanations for MILPs based on constraint reasoning techniques. First, we show how to encode the queries a user makes about the solution of an MILP problem as additional constraints. Then, we determine the reasons that constitute the answer to the user's query by computing the Irreducible Infeasible Subsystem (IIS) of the newly obtained set of constraints. Finally, we represent our explanation as a "graph of reasons" constructed from the IIS, which helps the user understand the structure among the reasons that answer their query. We test our method on instances of well-known optimisation problems to evaluate the empirical hardness of computing explanations.
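The pipeline in the abstract (add the user's query as a constraint, then extract an IIS from the now-infeasible system) can be illustrated with a toy deletion filter, a standard IIS algorithm. This is only a sketch under strong simplifications: each "constraint" is an interval bound on a single variable, and the `feasible`, `iis`, `model`, and `query` names are illustrative, not the authors' implementation, which would use an MILP solver.

```python
# Toy sketch of IIS extraction via a deletion filter, the core step X-MILP
# uses to answer a user query. Each constraint is an interval lo <= x <= hi
# on one variable; a real MILP would require a solver for feasibility checks.

def feasible(constraints):
    """Interval constraints are jointly feasible iff their intersection is nonempty."""
    lo = max(c[0] for c in constraints)
    hi = min(c[1] for c in constraints)
    return lo <= hi

def iis(constraints):
    """Deletion filter: permanently drop any constraint whose removal
    keeps the system infeasible; what remains is irreducible."""
    assert not feasible(constraints), "IIS is defined only for infeasible systems"
    kept = list(constraints)
    for c in list(kept):
        rest = [k for k in kept if k is not c]
        if rest and not feasible(rest):
            kept = rest  # c is not needed to explain the infeasibility
    return kept

# Original model constraints plus the user's query encoded as a constraint
# (e.g. "why can't x be at least 8?").
model = [(0, 10), (2, 6), (0, 9)]
query = (8, 10)
reasons = iis(model + [query])
print(reasons)  # -> [(2, 6), (8, 10)]: x <= 6 conflicts with the query x >= 8
```

The constraints returned by `iis` are exactly the "reasons" the method reports; in X-MILP they become the nodes of the graph of reasons, with edges reflecting the structure among them.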