🤖 AI Summary
Tree ensemble models are hard to interpret and can obscure the complex variable interactions they capture. Method: This paper proposes a novel rule extraction framework featuring (i) a regularized estimator that jointly controls the number of rules and the interaction depth of each rule; (ii) a tailored exact algorithm balancing accuracy and computational efficiency, complemented by an approximate algorithm for computing regularization paths; and (iii) non-asymptotic prediction error bounds showing performance on par with an oracle under the same complexity constraint. Contribution/Results: Integrating statistical learning theory with combinatorial optimization, the framework outperforms existing rule extraction methods across multiple benchmark datasets. It yields compact, human-interpretable rule sets while preserving high predictive accuracy, achieving a favorable trade-off between fidelity and interpretability.
📝 Abstract
Tree ensembles are non-parametric methods widely recognized for their accuracy and ability to capture complex interactions. While these models excel at prediction, they are difficult to interpret and may fail to uncover useful relationships in the data. We propose an estimator to extract compact sets of decision rules from tree ensembles. The extracted models are accurate and can be manually examined to reveal relationships between the predictors and the response. A key novelty of our estimator is the flexibility to jointly control the number of rules extracted and the interaction depth of each rule, which improves accuracy. We develop a tailored exact algorithm to efficiently solve optimization problems underlying our estimator and an approximate algorithm for computing regularization paths, sequences of solutions that correspond to varying model sizes. We also establish novel non-asymptotic prediction error bounds for our proposed approach, comparing it to an oracle that chooses the best data-dependent linear combination of the rules in the ensemble subject to the same complexity constraint as our estimator. The bounds illustrate that the large-sample predictive performance of our estimator is on par with that of the oracle. Through experiments, we demonstrate that our estimator outperforms existing algorithms for rule extraction.
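To make the pipeline concrete, here is a minimal RuleFit-style sketch of the general idea: enumerate root-to-leaf paths of a small random forest as candidate binary rules, then fit a sparse linear model over them. This is an illustration only, not the paper's estimator; in particular, the Lasso penalty below is an L1 surrogate for the exact cardinality control the paper solves, and `max_depth` stands in for the joint constraint on interaction depth (rule length).

```python
# Hedged sketch (RuleFit-style), NOT the paper's exact algorithm:
# candidate rules = leaf paths of a shallow forest; a Lasso selects a compact subset.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

def extract_rules(forest):
    """Collect every root-to-leaf path as a rule: a list of (feature, op, threshold)."""
    rules = []
    for est in forest.estimators_:
        tree = est.tree_
        def recurse(node, conds):
            if tree.children_left[node] == -1:  # leaf node
                if conds:  # skip the trivial empty rule
                    rules.append(conds)
                return
            f, t = tree.feature[node], tree.threshold[node]
            recurse(tree.children_left[node], conds + [(f, "<=", t)])
            recurse(tree.children_right[node], conds + [(f, ">", t)])
        recurse(0, [])
    return rules

def rule_matrix(X, rules):
    """Binary design matrix: Z[i, j] = 1 iff sample i satisfies every condition of rule j."""
    Z = np.ones((X.shape[0], len(rules)))
    for j, conds in enumerate(rules):
        for f, op, t in conds:
            Z[:, j] *= (X[:, f] <= t) if op == "<=" else (X[:, f] > t)
    return Z

X, y = make_friedman1(n_samples=400, n_features=10, random_state=0)
# max_depth=3 caps interaction depth: no extracted rule involves more than 3 splits.
forest = RandomForestRegressor(n_estimators=10, max_depth=3, random_state=0).fit(X, y)
rules = extract_rules(forest)
Z = rule_matrix(X, rules)
# Sparsity in the linear fit stands in for the paper's constraint on the number of rules.
lasso = Lasso(alpha=0.05).fit(Z, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{len(rules)} candidate rules -> {len(selected)} selected")
```

The selected rules can be read off directly as human-interpretable conditions (e.g., `x3 <= 0.5 and x1 > 0.2`), which is the kind of manual examination the abstract refers to.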