Explaining Decisions in ML Models: a Parameterized Complexity Analysis

๐Ÿ“… 2024-07-22
๐Ÿ›๏ธ International Conference on Principles of Knowledge Representation and Reasoning
๐Ÿ“ˆ Citations: 2
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This study systematically characterizes the parameterized computational complexity of abductive and contrastive explanation problems for transparent machine learning models, including decision trees, decision lists, and Boolean circuits. Covering both local and global variants of these tasks, it establishes fixed-parameter tractability classifications across multiple model classes within a single unified framework. Methodologically, the work combines parameterized complexity theory, formal satisfiability analysis, structured model reductions, and combinatorial modeling to rigorously delineate the tractability boundary of each explanation task. The results fill a theoretical gap in explainable AI (XAI), providing a unified complexity benchmark and principled theoretical guidance for designing and evaluating interpretable algorithms.

๐Ÿ“ Abstract
This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Ordered Binary Decision Diagrams, Random Forests, and Boolean Circuits, as well as ensembles thereof, each offering unique explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexities of generating explanations for these models. This work provides insights vital for further research in the domain of XAI, contributing to the broader discourse on the necessity of transparency and accountability in AI systems.
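The two explanation notions in the abstract can be made concrete on a toy transparent model. The sketch below is illustrative only and is not taken from the paper: it brute-forces a cardinality-minimal abductive explanation (a smallest feature set whose values alone force the model's decision) and a cardinality-minimal contrastive explanation (a smallest feature set whose flipping changes the decision) for a hypothetical three-feature Boolean classifier standing in for a small decision tree.

```python
from itertools import combinations, product

N = 3  # number of Boolean features in the toy model

def model(x):
    # Hypothetical transparent classifier: accept iff (x0 AND x1) OR x2.
    return int((x[0] and x[1]) or x[2])

def forces(subset, x, y):
    """True if fixing the features in `subset` to their values in x
    forces the model to output y for every completion of the rest."""
    free = [i for i in range(N) if i not in subset]
    for bits in product([0, 1], repeat=len(free)):
        z = list(x)
        for i, b in zip(free, bits):
            z[i] = b
        if model(z) != y:
            return False
    return True

def abductive_explanation(x):
    """Smallest set of feature indices sufficient for the decision on x."""
    y = model(x)
    for size in range(N + 1):
        for subset in combinations(range(N), size):
            if forces(set(subset), x, y):
                return set(subset)

def contrastive_explanation(x):
    """Smallest set of feature indices whose flipping changes the decision."""
    y = model(x)
    for size in range(1, N + 1):
        for subset in combinations(range(N), size):
            z = [1 - v if i in subset else v for i, v in enumerate(x)]
            if model(z) != y:
                return set(subset)

print(abductive_explanation([1, 1, 0]))    # {0, 1}: x0=1 and x1=1 suffice
print(contrastive_explanation([1, 1, 0]))  # {0}: flipping x0 rejects
```

Both searches enumerate all feature subsets, so they run in time exponential in the number of features; the paper's parameterized analysis asks when such tasks become fixed-parameter tractable, e.g. with the explanation size as the parameter.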
๐Ÿ”Ž Similar Papers
No similar papers found.