🤖 AI Summary
Traditional ensemble methods such as bagging, boosting, and dynamic ensemble selection suffer from high computational overhead and limited adaptability to heterogeneous data distributions. To address these limitations, we propose Hellsemble, an efficient, interpretable, progressive ensemble framework for binary classification. Hellsemble incrementally partitions training instances into "circles of difficulty" by passing misclassified instances from simpler models to subsequent, more specialised ones; a separate router model then learns to assign each new instance to the base learner best suited to its inferred difficulty. The core contributions are this difficulty-driven routing mechanism and the progressive training procedure, which together improve interpretability and generalisation while keeping inference efficient. On the OpenML-CC18 and Tabzilla benchmarks, Hellsemble often outperforms classical ensemble methods.
📝 Abstract
Ensemble learning has proven effective at improving predictive performance, but traditional methods such as bagging, boosting, and dynamic ensemble selection (DES) suffer from high computational cost and limited adaptability to heterogeneous data distributions. To address these limitations, we propose Hellsemble, a novel and interpretable ensemble framework for binary classification that exploits dataset complexity during both training and inference. Hellsemble incrementally partitions the dataset into "circles of difficulty" by iteratively passing misclassified instances from simpler models to subsequent ones, forming a committee of specialised base learners. Each model is trained on an increasingly challenging subset, while a separate router model learns to assign new instances to the most suitable base model based on their inferred difficulty. Hellsemble achieves strong classification accuracy while maintaining computational efficiency and interpretability. Experiments on the OpenML-CC18 and Tabzilla benchmarks show that Hellsemble often outperforms classical ensemble methods. Our findings suggest that embracing instance-level difficulty offers a promising direction for constructing efficient and robust ensemble systems.