Asymptotic Normality of Infinite Centered Random Forests: Application to Imbalanced Classification

📅 2025-06-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the estimation bias induced by resampling-based training of Centered Random Forests (CRFs) under extreme class imbalance. We establish the central limit theorem (CLT) for the infinite-forest limit of CRF, delivering the first asymptotic normality result with explicit convergence rate and precise constants. To mitigate bias while controlling variance in highly imbalanced settings, we propose an importance-sampling-based debiased estimator, IS-ICRF. We theoretically prove that the rebalancing-plus-debiasing strategy strictly dominates training on the original imbalanced data; empirical validation via Breiman’s random forest confirms the predicted variance reduction. Our core contributions are threefold: (i) the first rigorous CLT for rebalanced CRFs; (ii) a provably superior debiasing framework grounded in importance sampling; and (iii) a statistically sound, interpretable, and computationally tractable foundation for imbalanced learning.

📝 Abstract
Many classification tasks involve imbalanced data, in which one class is largely underrepresented. Several techniques consist in creating a rebalanced dataset on which a classifier is trained. In this paper, we study such a procedure theoretically when the classifier is a Centered Random Forest (CRF). We establish a Central Limit Theorem (CLT) for the infinite CRF with explicit rates and exact constants. We then prove that the CRF trained on the rebalanced dataset exhibits a bias, which can be removed with appropriate techniques. Based on an importance sampling (IS) approach, the resulting debiased estimator, called IS-ICRF, satisfies a CLT centered at the prediction function value. For high-imbalance settings, we prove that the IS-ICRF estimator enjoys a variance reduction compared to the ICRF trained on the original data. Our theoretical analysis therefore highlights the benefits of training random forests on a rebalanced dataset (followed by a debiasing procedure) rather than on the original data. In our experiments, the theoretical results, especially the variance rates and the variance reduction, also appear to hold for Breiman's random forests.
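The rebalance-then-debias idea in the abstract can be illustrated with a standard prior-shift correction based on importance weights. This is a simplified sketch, not the paper's IS-ICRF estimator (which is defined for the infinite CRF with its own bias analysis); the function name and interface are hypothetical.

```python
import numpy as np

def prior_correct(p_rebalanced, pi_orig, pi_train):
    """Map posterior probabilities learned on a rebalanced dataset back to
    the original class prior using importance weights.

    p_rebalanced : estimated P(Y=1 | X) under the rebalanced training prior
    pi_orig      : minority-class proportion in the original data
    pi_train     : minority-class proportion in the rebalanced training data
    """
    p = np.asarray(p_rebalanced, dtype=float)
    w_pos = pi_orig / pi_train              # importance weight, minority class
    w_neg = (1 - pi_orig) / (1 - pi_train)  # importance weight, majority class
    num = p * w_pos
    return num / (num + (1 - p) * w_neg)

# Example: a classifier trained on a 50/50 rebalanced sample outputs 0.5;
# under an original prior of 10% positives, the corrected posterior is 0.1.
corrected = prior_correct(0.5, pi_orig=0.1, pi_train=0.5)
```

The correction is the Bayes-rule adjustment for a known shift in class priors; the paper's contribution is proving that, for CRFs, this rebalance-plus-debias pipeline yields a CLT and a strict variance reduction under high imbalance, rather than introducing the correction itself.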
Problem

Research questions and friction points this paper is trying to address.

Theoretical study of Centered Random Forests for imbalanced classification
Establishing a Central Limit Theorem for the infinite CRF with explicit rates
Proposing the debiased IS-ICRF estimator for variance reduction under high imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Central Limit Theorem for Centered Random Forests
Debiasing via importance sampling
Variance reduction on imbalanced data