🤖 AI Summary
This paper addresses the bias of standard random forests and the unstable performance of Lasso-weighted variants. We propose a novel framework integrating adaptive Lasso-based variable selection with bootstrap-weighted ensemble learning. Our method employs theoretically grounded weight construction to improve the bias–variance trade-off: it can strictly outperform both standard random forests and Lasso-weighted ensembles regardless of the signal-to-noise ratio. We derive a tighter upper bound on prediction risk and elucidate the mechanism of improvement via bias–variance decomposition. Extensive experiments, including multiple simulation settings and real-world datasets, demonstrate that the proposed method significantly reduces prediction error, enhances model stability, and improves interpretability (e.g., yielding more robust variable importance measures), thereby validating its generality and superiority.
📝 Abstract
Random forests are a statistical learning technique that uses bootstrap aggregation to average high-variance, low-bias trees. Improvements to random forests, such as applying Lasso regression to the tree predictions, have been proposed to reduce model bias. However, these changes can sometimes degrade performance (e.g., increase mean squared error). In this paper, we show theoretically that the relative performance of these two methods, standard and Lasso-weighted random forests, depends on the signal-to-noise ratio. We further propose a unified framework that combines random forests and Lasso selection through adaptive weighting, and we show mathematically that it can strictly outperform the other two methods. We compare the three methods through simulation, including bias–variance decomposition, evaluation of error estimates, and variable importance analysis. We also demonstrate the versatility of our method through applications to a variety of real-world datasets.
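To make the Lasso-weighted baseline concrete, the following is a minimal sketch (not the paper's exact construction) of replacing a random forest's equal-weight average with Lasso-learned tree weights, assuming scikit-learn; the dataset, estimator counts, and cross-validation settings are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Illustrative synthetic regression problem (not from the paper).
X, y = make_regression(n_samples=500, n_features=20, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# Stack per-tree predictions as features: one column per tree.
P_train = np.column_stack([t.predict(X_train) for t in forest.estimators_])
P_test = np.column_stack([t.predict(X_test) for t in forest.estimators_])

# Lasso re-weights the trees; many coefficients shrink to exactly zero,
# so it simultaneously selects and weights ensemble members.
lasso = LassoCV(cv=5, random_state=0).fit(P_train, y_train)

rf_pred = forest.predict(P := X_test)   # equal-weight average over trees
weighted_pred = lasso.predict(P_test)   # Lasso-weighted ensemble
```

Whether the Lasso-weighted version helps or hurts depends on the signal-to-noise ratio, which is precisely the trade-off the paper analyzes.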