Random Feature Representation Boosting

📅 2025-01-30
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of simultaneously achieving high predictive performance, computational efficiency, and theoretical guarantees for deep neural networks on tabular data. The authors propose RFRBoost, a deep residual random feature network grounded in boosting theory. Its core innovation is the systematic integration of boosting into random feature network design: each layer uses a random feature mapping to approximate the functional gradient of the loss at the current representation, enabling greedy layer-wise optimization. For mean squared error (MSE) loss, a closed-form solution is derived; for general losses, fitting a residual block reduces to a quadratically constrained least squares problem. Evaluated on 91 benchmark tabular datasets, RFRBoost consistently outperforms conventional random feature neural networks (RFNNs) as well as end-to-end trained MLP ResNets, improving predictive accuracy and training speed while retaining theoretical guarantees from boosting theory.

πŸ“ Abstract
We introduce Random Feature Representation Boosting (RFRBoost), a novel method for constructing deep residual random feature neural networks (RFNNs) using boosting theory. RFRBoost uses random features at each layer to learn the functional gradient of the network representation, enhancing performance while preserving the convex optimization benefits of RFNNs. In the case of MSE loss, we obtain closed-form solutions to greedy layer-wise boosting with random features. For general loss functions, we show that fitting random feature residual blocks reduces to solving a quadratically constrained least squares problem. We demonstrate, through numerical experiments on 91 tabular datasets for regression and classification, that RFRBoost significantly outperforms traditional RFNNs and end-to-end trained MLP ResNets, while offering substantial computational advantages and theoretical guarantees stemming from boosting theory.
Problem

Research questions and friction points this paper is trying to address.

Deep Neural Networks
Performance Optimization
Reliability Theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

RFRBoost
Deep Random Neural Networks
Gradient Learning