AI Summary
Existing neural additive models (NAMs) model only first-order feature effects, limiting their ability to capture higher-order interactions and thereby constraining both predictive performance and interpretability in real-world applications. To address this, we propose Higher-Order Neural Additive Models (HONAMs), the first strictly interpretable framework supporting arbitrary-order feature interaction modeling. HONAMs extend the NAM architecture by introducing learnable high-order interaction terms, hierarchical regularization, and a gradient-guided interaction discovery mechanism. Crucially, they preserve modularity and single-feature visualizability while scaling to arbitrary interaction orders. Experiments across diverse real-world datasets demonstrate an average AUC improvement of 2.1% over baseline NAMs. Moreover, HONAMs enable quantitative measurement and visualization of interaction strengths. The implementation is publicly available.
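To make the architectural idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation, whose interaction mechanism may differ): a plain NAM sums one learned shape function per feature, and a higher-order variant adds a separately inspectable term per feature pair. The functions `shape_fn`, `nam_predict`, and `honam_predict` are illustrative names introduced here, not from the paper.

```python
import numpy as np

def shape_fn(w, x):
    """Tiny stand-in for a learned per-feature subnetwork.

    A real NAM would use a small MLP per feature; a single scaled
    tanh is enough to illustrate the additive structure.
    """
    return np.tanh(w * x)

def nam_predict(W1, X):
    """First-order NAM: f(x) = sum_i g_i(x_i).

    W1[i] parameterizes the shape function for feature i;
    X has shape (n_samples, n_features).
    """
    return sum(shape_fn(W1[i], X[:, i]) for i in range(X.shape[1]))

def honam_predict(W1, W2, X):
    """Illustrative higher-order variant: add one pairwise term
    h_ij(x_i, x_j) per feature pair. Each term remains a separate,
    visualizable contribution to the prediction.
    """
    out = nam_predict(W1, X)
    d = X.shape[1]
    for i in range(d):
        for j in range(i + 1, d):
            # Simple multiplicative interaction as a stand-in for a
            # learned second-order term.
            out = out + shape_fn(W2[i, j], X[:, i] * X[:, j])
    return out
```

Because each first- and second-order term is computed in isolation, its contribution can be plotted against its input(s) directly, which is the modularity the summary refers to.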
Abstract
Neural Additive Models (NAMs) have recently demonstrated promising predictive performance while maintaining interpretability. However, their capacity is limited to capturing only first-order feature effects, which restricts their effectiveness on real-world datasets. To address this limitation, we propose Higher-order Neural Additive Models (HONAMs), an interpretable machine learning model that effectively and efficiently captures feature interactions of arbitrary orders. HONAMs improve predictive accuracy without compromising interpretability, an essential requirement in high-stakes applications. This property also helps analyze and extract the high-order interactions present in a dataset. The source code for HONAM is publicly available at https://github.com/gim4855744/HONAM/.