🤖 AI Summary
To address the slow training of interpretable additive models and the usual trade-off between interpretability and predictive accuracy, this paper proposes an efficient and transparent piecewise-constant additive modeling approach. Methodologically: (i) it designs a fast, gradient-free piecewise optimization algorithm that sidesteps the iterative bottlenecks of gradient-based fitting; (ii) it introduces a feature-importance-driven sparse selection mechanism that substantially reduces model complexity without sacrificing performance; and (iii) it enforces the additive structural constraint throughout, guaranteeing global interpretability. Experiments on multiple benchmark datasets show that the method trains roughly 100× faster than the state-of-the-art Explainable Boosting Machine (EBM) while matching or exceeding its predictive accuracy. Because each feature's contribution is a one-dimensional shape function, the model also supports natural per-feature visualization. The approach thus combines high efficiency, transparency, and accuracy in additive modeling.
📝 Abstract
We present FAST, an optimization framework for fast additive segmentation. FAST segments piecewise-constant shape functions for each feature in a dataset to produce transparent additive models. The framework leverages a novel optimization procedure to fit these models roughly two orders of magnitude faster than existing state-of-the-art methods, such as explainable boosting machines [20]. We also develop new feature selection algorithms in the FAST framework to fit parsimonious models that perform well. Through experiments and case studies, we show that FAST improves the computational efficiency and interpretability of additive models.
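To make the model class concrete: an additive model of this kind predicts by summing one piecewise-constant shape function per feature, which is what makes per-feature contributions directly plottable. The sketch below is purely illustrative and is not the paper's FAST algorithm; the class names, breakpoints, and segment values are hypothetical.

```python
import numpy as np

class PiecewiseConstantShape:
    """Illustrative shape function: f(x) = values[k] for the segment
    containing x, where `breaks` holds sorted interior breakpoints and
    `values` has len(breaks) + 1 entries (one per segment)."""
    def __init__(self, breaks, values):
        self.breaks = np.asarray(breaks, dtype=float)
        self.values = np.asarray(values, dtype=float)

    def __call__(self, x):
        # searchsorted maps each x to the index of its segment
        return self.values[np.searchsorted(self.breaks, x, side="right")]

class AdditiveModel:
    """Prediction = intercept + sum over features of shape_j(x_j)."""
    def __init__(self, intercept, shapes):
        self.intercept = intercept
        self.shapes = shapes  # one shape function per feature column

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        out = np.full(X.shape[0], self.intercept)
        for j, shape in enumerate(self.shapes):
            out += shape(X[:, j])  # additive per-feature contribution
        return out

# Toy two-feature example with hypothetical breakpoints/values
model = AdditiveModel(
    intercept=1.0,
    shapes=[
        PiecewiseConstantShape(breaks=[0.0], values=[-1.0, 2.0]),
        PiecewiseConstantShape(breaks=[1.0, 3.0], values=[0.0, 0.5, 1.0]),
    ],
)
X = np.array([[-0.5, 2.0], [0.5, 4.0]])
print(model.predict(X))  # → [0.5 4.0]
```

Since each `shapes[j]` is a step function of a single feature, plotting it over that feature's range yields the per-feature contribution curves the paper refers to; the actual learning of breakpoints and segment values is what FAST's optimization procedure provides.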