🤖 AI Summary
Traditional ensemble methods (e.g., Random Forest, XGBoost) assign uniform weights to all CART trees, ignoring their input-dependent discriminative capabilities and thereby limiting classification performance. This paper proposes Adaptive Forests (AF), an ensemble framework that assigns context-aware, non-uniform weights to individual trees via an input-dependent dynamic weighting mechanism. Its core innovation lies in jointly leveraging Optimal Predictive-Policy Trees (OP2T) and Mixed-Integer Optimization (MIO) to achieve both interpretability and global optimality in weight assignment. Evaluated on over 20 real-world datasets spanning binary and multi-class classification tasks, AF consistently outperforms standard baselines and state-of-the-art weighted ensemble methods. The empirical results indicate that input-adaptive weighting improves generalization, supporting the value of dynamically calibrated tree contributions in ensemble learning.
📝 Abstract
Random Forests (RF) and Extreme Gradient Boosting (XGBoost) are two of the most widely used and best-performing classification and regression models. They aggregate equally weighted CART trees, generated randomly in RF or sequentially in XGBoost. In this paper, we propose Adaptive Forests (AF), a novel approach that adaptively selects the weights of the underlying CART models. AF combines (a) the Optimal Predictive-Policy Trees (OP2T) framework to prescribe tailored, input-dependent, unequal weights to trees and (b) Mixed-Integer Optimization (MIO) to refine weight candidates dynamically, enhancing overall performance. We demonstrate that AF consistently outperforms RF, XGBoost, and other weighted RF variants on binary and multi-class classification problems across 20+ real-world datasets.
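To make the core idea concrete, here is a minimal sketch of input-dependent tree weighting. This is not the paper's OP2T/MIO method: as an illustrative stand-in, each tree is weighted by its accuracy on the query point's nearest training neighbors, so the same forest votes with different weights for different inputs. The function `adaptive_predict` and all parameter choices are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bagged CART trees, as in a standard Random Forest.
rng = np.random.default_rng(0)
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))  # bootstrap sample
    trees.append(
        DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr[idx], y_tr[idx])
    )

# Input-dependent weights: score each tree on the k training points
# nearest to the query (a crude proxy for a learned weighting policy).
nn = NearestNeighbors(n_neighbors=30).fit(X_tr)

def adaptive_predict(X_query):
    _, neigh = nn.kneighbors(X_query)
    preds = np.stack([t.predict(X_query) for t in trees])  # (n_trees, n_query)
    out = np.empty(len(X_query), dtype=int)
    for i, nb in enumerate(neigh):
        local_acc = np.array([t.score(X_tr[nb], y_tr[nb]) for t in trees])
        w = local_acc / local_acc.sum()  # per-input, non-uniform tree weights
        votes = np.bincount(preds[:, i].astype(int), weights=w, minlength=2)
        out[i] = votes.argmax()
    return out

acc = (adaptive_predict(X_te) == y_te).mean()
```

Unlike a uniform majority vote, a tree that performs poorly near a given input contributes less to that input's prediction; AF replaces this heuristic with prescriptive policy trees and MIO-refined weight candidates.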