Subsampled Ensemble Can Improve Generalization Tail Exponentially

📅 2024-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Machine learning models often exhibit slow (polynomial-rate) tail decay of generalization error—and thus poor robustness—under heavy-tailed distributions and slow-convergence regimes. Method: The paper proposes subsampled ensembling: training base learners independently on multiple data subsamples and selecting the final model via majority voting. Contribution/Results: It provides a theoretical guarantee that this method achieves *exponential* tail decay of the excess risk—going beyond conventional ensembling, which chiefly reduces variance—without assumptions on the base learner's form. By combining heavy-tailed modeling with refined risk-bound analysis, the approach overcomes intrinsic convergence bottlenecks. Experiments confirm that, under heavy-tailed and low signal-to-noise-ratio settings, the tail-risk decay rate improves from polynomial to exponential, substantially enhancing robustness against rare events and distribution shifts.

📝 Abstract
Ensemble learning is a popular technique to improve the accuracy of machine learning models. It traditionally hinges on the rationale that aggregating multiple weak models can lead to better models with lower variance and hence higher stability, especially for discontinuous base learners. In this paper, we provide a new perspective on ensembling. By selecting the best model trained on subsamples via majority voting, we can attain exponentially decaying tails for the excess risk, even if the base learner suffers from slow (i.e., polynomial) decay rates. This tail enhancement power of ensembling is agnostic to the underlying base learner and is stronger than variance reduction in the sense of exhibiting rate improvement. We demonstrate how our ensemble methods can substantially improve out-of-sample performances in a range of numerical examples involving heavy-tailed data or intrinsically slow rates. Code for the proposed methods is available at https://github.com/mickeyhqian/VoteEnsemble.
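The subsample-and-vote idea from the abstract can be sketched as follows. This is a simplified illustration under assumed interfaces (the `fit` callback, function names, and disjoint-fold scheme are illustrative), not the authors' exact procedure—their implementation is at the VoteEnsemble repository linked above. Here the vote is taken over the base learners' predictions for concreteness; the paper's method uses voting to select among subsample-trained models.

```python
import numpy as np
from collections import Counter

def subsample_vote_ensemble(X, y, fit, k=5, seed=0):
    """Train one base learner per disjoint subsample and return a
    predictor that majority-votes the base predictions (sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)              # k disjoint subsamples
    models = [fit(X[f], y[f]) for f in folds]   # independent base learners

    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]       # majority label wins
    return predict

# Toy base learner: predicts its own subsample's majority class.
def fit(X_sub, y_sub):
    label = Counter(y_sub.tolist()).most_common(1)[0][0]
    return lambda x: label

X = np.arange(20).reshape(-1, 1)
y = np.array([0] * 18 + [1] * 2)                # imbalanced labels
predict = subsample_vote_ensemble(X, y, fit, k=5)
```

Because each of the five folds contains at most two of the minority labels, at least four base learners vote for the majority class, so a single noisy subsample cannot flip the ensemble's output—the intuition behind the tail-improvement result.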
Problem

Research questions and friction points this paper is trying to address.

Imbalanced Data
Learning Efficiency
Model Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble Learning
Majority Voting
Optimal Model Selection