Higher-order Neural Additive Models: An Interpretable Machine Learning Model with Feature Interactions

πŸ“… 2022-09-30
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 11
✨ Influential: 1
πŸ€– AI Summary
Existing neural additive models (NAMs) capture only first-order feature effects, which limits their ability to model higher-order interactions and thereby constrains both predictive performance and interpretability in real-world applications. To address this, we propose Higher-Order Neural Additive Models (HONAMs), a strictly interpretable framework that supports feature-interaction modeling of arbitrary order. HONAMs extend the NAM architecture with learnable high-order interaction terms, hierarchical regularization, and a gradient-guided interaction-discovery mechanism, while preserving modularity and single-feature visualizability as the interaction order grows. Experiments on diverse real-world datasets show an average AUC improvement of 2.1% over baseline NAMs. Moreover, HONAMs allow interaction strengths to be quantified and visualized. The implementation is publicly available.
πŸ“ Abstract
Neural Additive Models (NAMs) have recently demonstrated promising predictive performance while maintaining interpretability. However, their capacity is limited to capturing only first-order feature interactions, which restricts their effectiveness on real-world datasets. To address this limitation, we propose Higher-order Neural Additive Models (HONAMs), an interpretable machine learning model that effectively and efficiently captures feature interactions of arbitrary orders. HONAMs improve predictive accuracy without compromising interpretability, an essential requirement in high-stakes applications. This advantage of HONAM can help analyze and extract high-order interactions present in datasets. The source code for HONAM is publicly available at https://github.com/gim4855744/HONAM/.
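A NAM predicts by summing one learned shape function per feature; a higher-order model like HONAM adds further terms over feature tuples. The following is a minimal pure-Python sketch of this additive decomposition with toy closed-form shape functions standing in for the paper's learned sub-networks (the function names and forms here are illustrative assumptions, not the paper's implementation):

```python
def honam_predict(x, unary_fns, pairwise_fns):
    """Toy additive prediction: sum of first-order terms plus
    second-order interaction terms over feature pairs."""
    # First-order effects: one shape function per feature (the NAM part).
    total = sum(f(x[i]) for i, f in enumerate(unary_fns))
    # Second-order effects: one function per selected feature pair
    # (the higher-order extension).
    for (i, j), f in pairwise_fns.items():
        total += f(x[i], x[j])
    return total

# Toy shape functions in place of the learned sub-networks.
unary = [lambda a: 2 * a, lambda a: a ** 2]
pairwise = {(0, 1): lambda a, b: 0.5 * a * b}

print(honam_predict([1.0, 3.0], unary, pairwise))  # 2 + 9 + 1.5 = 12.5
```

Because each term depends on at most a pair of features, every unary term can still be plotted as a curve and every pairwise term as a heatmap, which is the interpretability property the abstract refers to.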
Problem

Research questions and friction points this paper is trying to address.

How to capture arbitrary-order feature interactions within an interpretable model
How to improve predictive accuracy without sacrificing interpretability
First-order-only additive models underperform on real-world datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

HONAMs capture arbitrary-order feature interactions
Model maintains interpretability while improving accuracy
Enables analysis of high-order interactions in datasets
Minkyu Kim
Ziovision Co., Ltd., Republic of Korea
Hyunjin Choi
Samsung SDS, KAIST
Artificial Intelligence · Machine Learning · Computational Linguistics
Jinho Kim
Kangwon National University, Republic of Korea