🤖 AI Summary
Dynamic updates of machine learning models frequently invalidate previously issued counterfactual explanations (recourse), undermining user actionability. To address this, we propose a Learning-Augmented Robust Explainable Decision Framework—the first to integrate learning augmentation into recourse design. Our method forecasts model evolution trends to jointly optimize consistency (minimizing adjustment cost when predictions are accurate) and robustness (bounding the worst-case cost increase when predictions err). We formally characterize the consistency–robustness trade-off, derive theoretical bounds linking prediction error to cost inflation, and unify robust optimization, online learning-based calibration, and minimum-cost counterfactual generation into a two-stage algorithm with provable performance guarantees. Experiments show that when prediction accuracy exceeds 80%, our framework reduces average recourse cost by 37% compared to baselines, while worst-case cost growth remains tightly aligned with the theoretical upper bounds—significantly outperforming purely robust approaches.
📝 Abstract
The widespread use of machine learning models in high-stakes domains can have a major negative impact, especially on individuals who receive undesirable outcomes. Algorithmic recourse provides such individuals with suggestions of minimum-cost improvements they can make to achieve a desirable outcome in the future. However, machine learning models often get updated over time, and this can cause a recourse to become invalid (i.e., no longer lead to the desirable outcome). The robust recourse literature aims to choose recourses that are less sensitive to model changes, even adversarial ones, but this robustness comes at a higher cost. To overcome this obstacle, we initiate the study of algorithmic recourse through the learning-augmented framework and evaluate the extent to which a designer equipped with a prediction regarding future model changes can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when the prediction is inaccurate (robustness). We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
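The consistency–robustness trade-off described above can be illustrated with a minimal one-dimensional sketch. This is not the paper's actual algorithm; all names (`recourse_target`, `cost`), the linear interpolation via a trust parameter `lam`, and the threshold-classifier setup are illustrative assumptions. The idea: a recourse target can hedge between the predicted future decision threshold (cheap if the prediction is right) and the worst-case threshold (expensive but always valid).

```python
# Hypothetical sketch of learning-augmented recourse on a 1-D threshold
# classifier. An individual at feature value x0 is approved iff x0 >= t,
# where the threshold t may change after a model update.

def recourse_target(t_pred: float, t_max: float, lam: float) -> float:
    """Interpolate between trusting the prediction (lam=0, consistency)
    and the fully robust target (lam=1, robustness).
    t_pred: predicted future threshold; t_max: worst-case threshold."""
    return (1.0 - lam) * t_pred + lam * t_max

def cost(x0: float, target: float) -> float:
    """Cost = distance the individual must move to reach the target."""
    return max(0.0, target - x0)

x0 = 0.0       # individual's current feature value (assumed)
t_pred = 1.0   # predicted future threshold (assumed)
t_max = 2.0    # adversarial worst-case threshold (assumed)

# lam = 0.0 fully trusts the prediction: cost 1.0 if it is accurate.
# lam = 1.0 is purely robust: cost 2.0 regardless of the prediction.
# Intermediate lam trades consistency against robustness.
for lam in (0.0, 0.5, 1.0):
    print(f"lam={lam}: cost={cost(x0, recourse_target(t_pred, t_max, lam))}")
```

Under this toy model, an accurate prediction lets the designer cut the recourse cost in half relative to the purely robust choice, while the worst-case cost of any `lam` is still bounded by the distance to `t_max`.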