🤖 AI Summary
This work addresses the limitations of existing explainable AI methods in generating counterfactual explanations that simultaneously satisfy feasibility, plausibility, and diversity, particularly beyond binary classification settings. The authors propose a gradient-based optimization framework that explicitly integrates counterfactual generation with feature attribution to jointly optimize these three properties in multi-class scenarios. By combining counterfactual reasoning with feature importance analysis, the method moves past the binary-class restriction of conventional approaches and compares favorably with state-of-the-art baselines, including Wachter, DiCE, CARE, and SHAP, across evaluation metrics such as validity, proximity, sparsity, plausibility, and diversity. It also identifies influential features, yielding high-quality local explanations.
📝 Abstract
Explainable Artificial Intelligence (XAI) is increasingly essential as AI systems are deployed in critical fields such as healthcare and finance, offering transparency into AI-driven decisions. Two major XAI paradigms, counterfactual explanations (CFX) and feature attribution (FA), serve distinct roles in model interpretability. This study introduces GradCFA, a hybrid framework combining CFX and FA to improve interpretability by explicitly optimizing feasibility, plausibility, and diversity, key qualities that existing methods often leave unbalanced. Unlike most CFX research, which focuses on binary classification, GradCFA extends to multi-class scenarios, supporting a wider range of applications. We evaluate GradCFA's validity, proximity, sparsity, plausibility, and diversity against state-of-the-art methods, including Wachter, DiCE, and CARE for CFX, and SHAP for FA. Results show that GradCFA effectively generates feasible, plausible, and diverse counterfactuals while offering valuable FA insights. By identifying influential features and validating their impact, GradCFA advances AI interpretability. The implementation code is available at: https://github.com/jacob-ws/GradCFs.
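To illustrate the kind of gradient-based counterfactual search the abstract describes, here is a minimal Wachter-style sketch: starting from an input, gradient descent minimizes a prediction loss (pushing the model's output toward a target class probability) plus a distance penalty (keeping the counterfactual close to the original). This is a simplified stand-in, not the GradCFA implementation; the toy logistic model, its weights, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy differentiable classifier: logistic regression with hypothetical weights.
w = np.array([1.5, -2.0])
b = 0.3

def predict(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x0, target=0.9, lam=0.1, lr=0.1, steps=500):
    """Wachter-style search: minimize (f(x') - target)^2 + lam * ||x' - x0||^2.

    The first term drives the prediction toward `target`; the second keeps
    the counterfactual x' close (proximal) to the original input x0.
    """
    x = x0.copy()
    for _ in range(steps):
        p = predict(x)
        # Analytic gradient of the squared prediction loss:
        # d/dx (p - target)^2 = 2 (p - target) * p (1 - p) * w  (logistic chain rule)
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w
        # Gradient of the proximity penalty lam * ||x - x0||^2
        grad_dist = 2.0 * lam * (x - x0)
        x = x - lr * (grad_pred + grad_dist)
    return x

x0 = np.array([-1.0, 1.0])   # original point, predicted firmly negative
xcf = counterfactual(x0)     # counterfactual that flips the prediction
```

Methods like DiCE extend this single-objective search with additional terms, e.g. a diversity penalty across several counterfactuals; the trade-off between validity and proximity is controlled here by `lam`.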