Neural-ANOVA: Model Decomposition for Interpretable Machine Learning

📅 2024-08-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Quantifying interaction effects in deep neural networks (DNNs) remains a fundamental challenge for model interpretability. Method: This paper introduces the first fast, closed-form ANOVA decomposition method applicable to arbitrary DNN architectures. By establishing a differentiable model decomposition framework—integrating ANOVA functional decomposition, closed-form subspace integration, and gradient-assisted parametric learning—it enables precise analytical extraction and visualization of global, multi-order input interactions. Contribution/Results: Evaluated on multiple benchmark datasets, the method achieves an average reconstruction error < 0.02 and significantly improves high-order interaction detection accuracy. It combines theoretical rigor with computational efficiency (linear scalability in network depth), enabling practical applications in model diagnosis, trustworthiness verification, and decision attribution. The approach establishes a novel paradigm for interpretable analysis of black-box DNNs.

Technology Category

Application Category

📝 Abstract
The analysis of variance (ANOVA) decomposition offers a systematic method to understand the interaction effects that contribute to a specific decision output. In this paper we introduce Neural-ANOVA, an approach to decompose neural networks into glassbox models using the ANOVA decomposition. Our approach formulates a learning problem, which enables rapid and closed-form evaluation of integrals over subspaces that appear in the calculation of the ANOVA decomposition. Finally, we conduct numerical experiments to illustrate the advantages of enhanced interpretability and model validation by a decomposition of the learned interaction effects.
Problem

Research questions and friction points this paper is trying to address.

Decompose neural networks into lower-order models
Enable fast analytical evaluation of integrals
Compare approximation properties with regression approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decompose neural networks via functional ANOVA
Fast analytical evaluation of subspace integrals
Numerical experiments compare approximation properties
🔎 Similar Papers
No similar papers found.