🤖 AI Summary
This study investigates the computational complexity of generating counterfactual and semi-factual explanations in explainable artificial intelligence (XAI). Using formal modeling and computational complexity theory, it surveys existing results showing that such explanations are intractable to compute exactly in many settings, and establishes, for the first time, that they are also provably hard to approximate efficiently under standard complexity-theoretic assumptions. The work derives lower bounds on the approximability of counterfactual and semi-factual explanations, uncovering an inherent computational hardness. These theoretical results provide a rigorous foundation for guiding algorithm design and informing policy decisions in XAI systems, highlighting fundamental limitations that practical implementations must account for.
📝 Abstract
Providing clear explanations for the decisions of machine learning models is essential if these models are to be deployed in critical applications. Counterfactual and semi-factual explanations have emerged as two mechanisms for giving users insight into a model's outputs. We provide an overview of the computational complexity results in the literature for generating these explanations, finding that in many cases generation is computationally hard. We strengthen this argument considerably by contributing our own inapproximability results, showing that not only are explanations often hard to generate exactly, but under certain assumptions they are also hard to approximate. We discuss the implications of these complexity results for the XAI community and for policymakers seeking to regulate explanations in AI.
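For intuition on the objects being studied: a counterfactual explanation answers "what minimal change to the input would flip the model's decision?", while a semi-factual answers "even if these features were different, the decision would stay the same." The Python sketch below is purely illustrative and is not the paper's construction; the function names, the restriction to binary features, and the toy model are assumptions made for the example. It shows why naive exact generation is expensive: the search enumerates feature subsets, and the number of candidates grows combinatorially with the number of features, which is exactly the kind of blowup the complexity results formalize.

```python
import itertools
from typing import Callable, Optional, Tuple

Point = Tuple[int, ...]

def find_counterfactual(model: Callable[[Point], int],
                        x: Point,
                        max_changes: int) -> Optional[Point]:
    """Brute-force counterfactual search: find a point differing from x in at
    most `max_changes` binary features that flips the model's output.
    Enumerating subsets of features is exponential in the worst case."""
    original = model(x)
    for k in range(1, max_changes + 1):
        for idxs in itertools.combinations(range(len(x)), k):
            x_prime = list(x)
            for i in idxs:
                x_prime[i] = 1 - x_prime[i]  # flip the binary feature
            if model(tuple(x_prime)) != original:
                return tuple(x_prime)  # smallest-k counterfactual found first
    return None

def find_semifactual(model: Callable[[Point], int],
                     x: Point,
                     num_changes: int) -> Optional[Point]:
    """Brute-force semi-factual search: find a point differing from x in
    `num_changes` features for which the model's output is unchanged."""
    original = model(x)
    for idxs in itertools.combinations(range(len(x)), num_changes):
        x_prime = list(x)
        for i in idxs:
            x_prime[i] = 1 - x_prime[i]
        if model(tuple(x_prime)) == original:
            return tuple(x_prime)
    return None

# Hypothetical toy classifier: outputs 1 iff at least two of three features are set.
model = lambda x: int(sum(x) >= 2)
x = (1, 1, 0)
print(find_counterfactual(model, x, max_changes=2))  # (0, 1, 0): one flip changes the decision
print(find_semifactual(model, x, num_changes=1))     # (1, 1, 1): one flip, same decision
```

On this three-feature toy model the enumeration is instant, but the candidate set for n features and k allowed changes has size on the order of n choose k, so exact search scales badly; the results surveyed and proved in the paper show that, under standard assumptions, no efficient algorithm can sidestep this in general, even approximately.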