🤖 AI Summary
To address the limitations of soft regression trees—namely, low predictive accuracy on complex functions and large-scale data, poor training efficiency, and insufficient stability—this paper proposes a novel variant: the Soft Multivariate Regression Tree (SRT). SRT employs a probabilistic, single-leaf activation routing mechanism to enable efficient conditional computation and decomposable optimization. The authors design a decomposition-based training algorithm integrating clustering-based initialization and heuristic sample reallocation, supported by theoretical convergence guarantees, and establish a general function approximation theory for SRTs. Extensive experiments across 15 benchmark datasets demonstrate that SRT significantly outperforms existing soft tree methods (e.g., that of Blanquero et al.) in both prediction accuracy and robustness; it trains orders of magnitude faster than the Bertsimas–Dunn mixed-integer programming approach while achieving marginally higher average accuracy, and it is also compared against the Random Forest ensemble method.
📝 Abstract
Decision trees are widely used for classification and regression tasks in a variety of application fields due to their interpretability and good accuracy. During the past decade, growing attention has been devoted to globally optimized decision trees with deterministic or soft splitting rules at branch nodes, which are trained by optimizing the error function over all the tree parameters. In this work, we propose a new variant of soft multivariate regression trees (SRTs) where, for every input vector, the prediction is defined as the linear regression associated with a single leaf node, namely, the leaf node obtained by routing the input vector from the root along the higher-probability branches. SRTs exhibit the conditional computation property, i.e., each prediction depends on a small number of nodes (parameters), and our nonlinear optimization formulation for training them is amenable to decomposition. After showing a universal approximation result for SRTs, we present a decomposition training algorithm including a clustering-based initialization procedure and a heuristic for reassigning the input vectors along the tree. Under mild assumptions, we establish asymptotic convergence guarantees. Experiments on 15 well-known datasets indicate that our SRTs and decomposition algorithm yield higher accuracy and robustness compared with traditional soft regression trees trained using the nonlinear optimization formulation of Blanquero et al., and a significant reduction in training times as well as a slightly better average accuracy compared with the mixed-integer optimization approach of Bertsimas and Dunn. We also report a comparison with the Random Forest ensemble method.
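The single-leaf routing idea described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): each branch node holds a multivariate split whose sigmoid gives the probability of going left, the input is routed down the higher-probability branch only, and the prediction is the linear regression of the one leaf that is reached. All node parameters and class names here are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Branch:
    """Branch node with an oblique (multivariate) split w·x + b."""
    def __init__(self, w, b, left, right):
        self.w, self.b = w, b
        self.left, self.right = left, right

class Leaf:
    """Leaf node carrying its own linear regression beta·x + beta0."""
    def __init__(self, beta, beta0):
        self.beta, self.beta0 = beta, beta0

def predict(node, x):
    """Route x from the root along the higher-probability branch at each
    node, so exactly one leaf is activated (conditional computation)."""
    while isinstance(node, Branch):
        p_left = sigmoid(node.w @ x + node.b)  # probability of going left
        node = node.left if p_left >= 0.5 else node.right
    return node.beta @ x + node.beta0

# Toy depth-1 tree: split on x1 - x2, two leaf regressions.
tree = Branch(np.array([1.0, -1.0]), 0.0,
              left=Leaf(np.array([2.0, 0.0]), 1.0),
              right=Leaf(np.array([0.0, 3.0]), -1.0))
x = np.array([2.0, 1.0])   # w·x = 1 > 0, so p_left > 0.5: left leaf is used
print(predict(tree, x))    # left-leaf regression: 2*2 + 0*1 + 1 = 5.0
```

Because only the nodes on one root-to-leaf path are evaluated, the cost of a prediction grows with the tree depth rather than the total number of nodes, which is what makes the training formulation amenable to decomposition.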