🤖 AI Summary
Problem: When a learner has repeated access to data from the same domain, decision trees and their ensembles (e.g., random forests, GBDTs) are difficult to configure so that they are simultaneously accurate and interpretable.
Method: This paper proposes a learnable, tunable unified tree framework. It introduces (1) parameterized classes of node-splitting criteria that interpolate between the widely used entropy and Gini impurity criteria, enabling data-adaptive choice of the splitting function; (2) theoretical bounds on the number of samples needed to learn the splitting function appropriate for the domain at hand; and (3) sample-complexity results for tuning prior parameters in Bayesian decision tree learning, hyperparameters of classical pruning algorithms such as minimum cost complexity pruning, and hyperparameters of popular tree ensembles.
Results: Experiments on real-world datasets show that the learned, data-specific decision trees are simultaneously more accurate and more interpretable than standard baselines. The framework combines theoretical rigor, via provable sample-complexity guarantees, with practical utility: it integrates with existing tree-based pipelines and can be used to optimize the explainability-versus-accuracy trade-off, bridging a gap between statistical performance and human-understandable structure in tree learning.
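To make point (1) concrete, a parameterized splitting criterion can be sketched as a convex combination of entropy and Gini impurity, with a single tunable parameter `alpha` selected on data from the target domain. This is an illustrative assumption for exposition only; the paper's actual parametric family (and how it is tuned) may differ:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a class-probability vector (0*log(0) treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gini(p):
    """Gini impurity: 1 - sum_i p_i^2."""
    return 1.0 - np.sum(p ** 2)

def interpolated_impurity(p, alpha):
    """Hypothetical tunable criterion: alpha=0 recovers Gini, alpha=1
    recovers entropy, intermediate values interpolate between the two."""
    return alpha * entropy(p) + (1.0 - alpha) * gini(p)

def split_gain(y_parent, y_left, y_right, alpha):
    """Impurity reduction of a candidate split under the tunable criterion,
    as a top-down tree builder would score it."""
    def probs(y):
        _, counts = np.unique(y, return_counts=True)
        return counts / counts.sum()
    n = len(y_parent)
    return (interpolated_impurity(probs(y_parent), alpha)
            - len(y_left) / n * interpolated_impurity(probs(y_left), alpha)
            - len(y_right) / n * interpolated_impurity(probs(y_right), alpha))
```

A top-down learner would score every candidate split with `split_gain` and pick the maximizer; the framework's contribution is choosing `alpha` (and the other hyperparameters) from repeated samples of the domain rather than fixing it a priori.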
📝 Abstract
Decision trees and their ensembles are popular in machine learning as easy-to-understand models. Several techniques have been proposed in the literature for learning tree-based classifiers, with different techniques working well for data from different domains. In this work, we develop approaches to design tree-based learning algorithms given repeated access to data from the same domain. We study multiple formulations covering different aspects of, and popular techniques for, learning decision-tree-based models. We propose novel parameterized classes of node-splitting criteria for top-down algorithms, which interpolate between the popular entropy- and Gini-impurity-based criteria, and provide theoretical bounds on the number of samples needed to learn the splitting function appropriate for the data at hand. We also study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression. We further consider the problem of tuning hyperparameters in pruning decision trees with classical algorithms, including minimum cost complexity pruning. In addition, our techniques can be used to optimize the explainability-versus-accuracy trade-off when using decision trees. We extend our results to tuning popular tree-based ensembles, including random forests and gradient-boosted trees. Finally, we demonstrate the significance of our approach on real-world datasets by learning data-specific decision trees that are simultaneously more accurate and more interpretable.
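The pruning-hyperparameter tuning the abstract mentions can be illustrated with minimum cost complexity pruning as exposed in scikit-learn: the tree's `cost_complexity_pruning_path` enumerates the effective values of the complexity parameter `ccp_alpha`, each yielding a distinct pruned subtree, and one can then select the value by validation accuracy. This is a generic, validation-based sketch, not the tuning procedure analyzed in the paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Effective ccp_alpha values for this training set: each one on the
# path prunes away at least one additional subtree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

# Fit one pruned tree per candidate alpha and keep the best on held-out data.
best = max(
    (DecisionTreeClassifier(ccp_alpha=a, random_state=0).fit(X_tr, y_tr)
     for a in path.ccp_alphas),
    key=lambda t: t.score(X_val, y_val),
)
print(best.ccp_alpha, best.get_n_leaves(), best.score(X_val, y_val))
```

Larger `ccp_alpha` values trade validation accuracy for smaller (more interpretable) trees, which is exactly the explainability-versus-accuracy trade-off the work proposes to optimize in a principled, data-driven way.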