🤖 AI Summary
Neural networks achieve high predictive accuracy but suffer from poor interpretability due to their “black-box” nature. Method: We propose neuralGAM, the first R implementation of an architecture-agnostic Generalized Additive Model–Neural Network (GAM-NN) framework. It decouples the input features, constructs differentiable subnetworks, each dedicated to a single feature, and aggregates their outputs through an additive structure, thereby preserving the expressive power of deep learning while inheriting the transparency of GAMs. The method supports arbitrary MLP architectures, enables end-to-end joint optimization, and provides intuitive visualization of feature effects. Contribution/Results: Evaluated on synthetic data and multiple real-world benchmark tasks, neuralGAM achieves predictive performance on par with standard MLPs while substantially improving interpretability, helping to bridge the traditional trade-off between accuracy and explainability in deep learning models.
📝 Abstract
Nowadays, Neural Networks are considered among the most effective methods for tasks such as anomaly detection, computer-aided disease detection, and natural language processing. However, these networks suffer from the “black-box” problem, which makes it difficult to understand how they reach their decisions. To address this issue, we introduce an R package called neuralGAM. The package implements a Neural Network topology based on Generalized Additive Models, fitting an independent Neural Network to estimate the contribution of each feature to the output variable, which yields a highly accurate yet interpretable Deep Learning model. The neuralGAM package provides a flexible framework for training Generalized Additive Neural Networks and imposes no restrictions on the Neural Network architecture. We illustrate the use of the package on both synthetic and real-data examples.
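To make the additive idea behind such GAM-NNs concrete, the following is a minimal, self-contained Python sketch (not the neuralGAM R package itself, whose implementation may differ): one tiny MLP subnetwork is fitted per input feature, their outputs are summed with an intercept, and all subnetworks are trained jointly by gradient descent on a shared squared-error loss. All hyperparameters (hidden size, learning rate, epochs) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic additive data: y = sin(3*x1) + x2^2 + noise (illustrative choice)
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=n)


class SubNet:
    """One small MLP per feature (1 -> hidden -> 1) with a tanh hidden layer."""

    def __init__(self, hidden=16):
        self.W1 = rng.normal(scale=0.5, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.5, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, x):
        # x has shape (n, 1); returns this feature's additive contribution (n,)
        self.h = np.tanh(x @ self.W1 + self.b1)
        return (self.h @ self.W2 + self.b2).ravel()

    def backward(self, x, grad_out, lr):
        # grad_out is dLoss/d(subnet output); plain backprop + SGD step
        g = grad_out[:, None]                       # (n, 1)
        dW2 = self.h.T @ g
        db2 = g.sum(axis=0)
        dh = (g @ self.W2.T) * (1 - self.h ** 2)    # tanh derivative
        dW1 = x.T @ dh
        db1 = dh.sum(axis=0)
        self.W2 -= lr * dW2
        self.b2 -= lr * db2
        self.W1 -= lr * dW1
        self.b1 -= lr * db1


# One subnetwork per feature, plus a fixed intercept (the mean response)
nets = [SubNet() for _ in range(X.shape[1])]
intercept = y.mean()

# Joint end-to-end training: the loss couples all subnetworks
for epoch in range(3000):
    pred = intercept + sum(net.forward(X[:, [j]]) for j, net in enumerate(nets))
    grad = 2 * (pred - y) / n                       # d(MSE)/d(pred)
    for j, net in enumerate(nets):
        net.backward(X[:, [j]], grad, lr=0.05)

pred = intercept + sum(net.forward(X[:, [j]]) for j, net in enumerate(nets))
mse = np.mean((pred - y) ** 2)
```

Because each feature's effect is carried by its own subnetwork, plotting `net.forward` over a grid of feature values recovers an interpretable partial-effect curve, analogous to the component plots of a classical GAM.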