Additive Models Explained: A Computational Complexity Approach

📅 2025-10-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically investigates the computational complexity of explanation generation for generalized additive models (GAMs). Method: Leveraging computational complexity theory—including standard assumptions such as P ≠ NP—we rigorously characterize the solvability boundaries of GAM explanations across diverse input space structures, component model types (e.g., splines, trees, neural networks), and regression versus classification tasks. Contribution/Results: We establish that explanation hardness critically depends on both input structure and task type. Notably, neural additive models admit polynomial-time explainability under specific configurations—challenging the prevailing belief that interpretability necessarily entails computational inefficiency. This work introduces the first formal theoretical framework for explanation feasibility in GAMs, precisely identifying necessary and sufficient conditions for efficient explanation. It thus provides foundational theoretical support and actionable guidance for interpretable AI.

📝 Abstract
Generalized Additive Models (GAMs) are commonly considered *interpretable* within the ML community, as their structure makes the relationship between inputs and outputs relatively understandable. Therefore, it may seem natural to hypothesize that obtaining meaningful explanations for GAMs could be performed efficiently and would not be computationally infeasible. In this work, we challenge this hypothesis by analyzing the *computational complexity* of generating different explanations for various forms of GAMs across multiple contexts. Our analysis reveals a surprisingly diverse landscape of both positive and negative complexity outcomes. Particularly, under standard complexity assumptions such as P ≠ NP, we establish several key findings: (1) in stark contrast to many other common ML models, the complexity of generating explanations for GAMs is heavily influenced by the structure of the input space; (2) the complexity of explaining GAMs varies significantly with the types of component models used, but interestingly, these differences only emerge under specific input domain settings; (3) significant complexity distinctions appear for obtaining explanations in regression tasks versus classification tasks in GAMs; and (4) expressing complex models like neural networks additively (e.g., as neural additive models) can make them easier to explain, though interestingly, this benefit appears only for certain explanation methods and input domains. Collectively, these results shed light on the feasibility of computing diverse explanations for GAMs, offering a rigorous theoretical picture of the conditions under which such computations are possible or provably hard.
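To make the additive structure discussed in the abstract concrete, here is a minimal, illustrative sketch (not the paper's code) of a GAM: the prediction is a bias plus a sum of independent per-feature shape functions, f(x) = b + Σᵢ fᵢ(xᵢ), and each feature's contribution can be read off directly.

```python
# Hypothetical minimal GAM sketch: prediction is f(x) = bias + sum_i f_i(x_i).
# All names here are illustrative, not taken from the paper.

def gam_predict(shape_functions, bias, x):
    """Additive prediction: each feature contributes independently."""
    return bias + sum(f_i(x_i) for f_i, x_i in zip(shape_functions, x))

# Toy component models: a linear component and a stump (tree-like) component.
shapes = [
    lambda v: 0.5 * v,                  # spline/linear-style component
    lambda v: 1.0 if v > 2 else -1.0,   # decision-stump-style component
]

x = [4.0, 3.0]
pred = gam_predict(shapes, bias=0.1, x=x)          # 0.1 + 2.0 + 1.0 = 3.1

# Per-feature contributions fall out of the additive form, which is why
# GAMs are commonly considered interpretable.
contributions = [f(v) for f, v in zip(shapes, x)]  # [2.0, 1.0]
```

The key point for the complexity analysis is that the components never interact: the model's behavior decomposes feature by feature, which is exactly the structure the paper's positive and negative results hinge on.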
Problem

Research questions and friction points this paper is trying to address.

Analyzing computational complexity of generating explanations for GAMs
Investigating how input space structure affects explanation complexity
Comparing explanation complexity between regression and classification GAMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes computational complexity of GAM explanations
Reveals complexity varies by input space structure
Shows additive neural networks ease certain explanations
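To illustrate why additivity can ease certain explanation queries, the following hedged sketch (illustrative names and a simplified setting, not the paper's construction) checks whether a fixed subset of features is "sufficient" for a binary classifier sign(b + Σᵢ fᵢ(xᵢ)) over finite feature domains. Because the score is additive, the adversarial completion of the free features optimizes each fᵢ independently, so the check takes polynomial time rather than requiring a search over all completions.

```python
# Hedged sketch: sufficiency check for an additive binary classifier over
# finite domains. A subset `fixed` of features is sufficient if no setting
# of the remaining features can flip the sign of the score. Additivity lets
# the adversary optimize each free feature independently.

def is_sufficient(shapes, domains, bias, x, fixed):
    """True iff fixing features in `fixed` at their values in x
    guarantees the sign of the additive score."""
    score = bias + sum(f(v) for f, v in zip(shapes, x))
    positive = score >= 0
    worst = bias
    for i, (f, dom, v) in enumerate(zip(shapes, domains, x)):
        if i in fixed:
            worst += f(v)
        else:
            # The adversary pushes each free feature toward flipping the sign;
            # with additivity this is a per-feature min (or max).
            worst += min(f(u) for u in dom) if positive else max(f(u) for u in dom)
    return (worst >= 0) == positive

shapes = [lambda v: v, lambda v: 2 * v]
domains = [[-1, 0, 1], [-1, 0, 1]]
x = [1, 1]  # score = 3, positive prediction

alone_1 = is_sufficient(shapes, domains, bias=0.0, x=x, fixed={1})  # True
alone_0 = is_sufficient(shapes, domains, bias=0.0, x=x, fixed={0})  # False
```

This is one instance of the general theme above: expressing a model additively can turn an explanation query that is hard for entangled models into a tractable per-component computation, though, as the paper shows, which queries become tractable depends on the input domain and the component types.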