On the choice of the non-trainable internal weights in random feature maps

📅 2024-08-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the limited prediction accuracy of random feature maps when learning one-step propagator maps for dynamical systems, which stems from a suboptimal choice of the fixed hidden-layer weights. The authors propose a computationally cheap hit-and-run sampling strategy for selecting good non-trainable internal weights, and show that the number of good features is the main factor controlling forecasting skill, acting as an effective feature dimension. Unlike single-layer networks whose internal weights are trained by gradient descent, the method learns only the output weights via linear regression, achieving superior forecasting skill at several orders of magnitude lower computational cost. Experiments across dynamical-system forecasting tasks demonstrate strong accuracy and generalization, providing both theoretical grounding and a practical recipe for gradient-free random feature learning.

📝 Abstract
The computationally cheap machine learning architecture of random feature maps can be viewed as a single-layer feedforward network in which the weights of the hidden layer are random but fixed and only the outer weights are learned via linear regression. The internal weights are typically chosen from a prescribed distribution. The choice of the internal weights significantly impacts the accuracy of random feature maps. We address here the task of how to best select the internal weights. In particular, we consider the forecasting problem whereby random feature maps are used to learn a one-step propagator map for a dynamical system. We provide a computationally cheap hit-and-run algorithm to select good internal weights which lead to good forecasting skill. We show that the number of good features is the main factor controlling the forecasting skill of random feature maps and acts as an effective feature dimension. Lastly, we compare random feature maps with single-layer feedforward neural networks in which the internal weights are now learned using gradient descent. We find that random feature maps have superior forecasting capabilities whilst having several orders of magnitude lower computational cost.
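The architecture described in the abstract can be sketched in a few lines: random, fixed internal weights map states into features, and only the outer weights are fit by ridge regression to learn a one-step propagator. The Lorenz-63 system, the tanh nonlinearity, the sampling ranges, and all constants below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (toy data source)."""
    dx = np.array([
        sigma * (x[1] - x[0]),
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ])
    return x + dt * dx

rng = np.random.default_rng(0)

# Generate a training trajectory.
T, d = 2000, 3
X = np.empty((T, d))
X[0] = np.array([1.0, 1.0, 1.0])
for t in range(T - 1):
    X[t + 1] = lorenz63_step(X[t])

# Fixed random internal weights and biases: never trained.
# Their distribution is exactly what the paper proposes to choose well.
D = 300                                  # number of random features
W = rng.uniform(-0.1, 0.1, size=(D, d))  # internal weights (assumed range)
b = rng.uniform(-0.5, 0.5, size=D)       # internal biases (assumed range)

Phi = np.tanh(X[:-1] @ W.T + b)          # feature matrix, shape (T-1, D)
Y = X[1:]                                # one-step-ahead targets

# Outer weights via ridge regression: the only trained parameters.
lam = 1e-6
Wout = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ Y)

# Autonomous forecast: feed predictions back into the learned map.
x = X[-1].copy()
for _ in range(10):
    x = np.tanh(x @ W.T + b) @ Wout
```

The entire "training" step is one linear solve, which is why the approach is orders of magnitude cheaper than gradient descent on the internal weights.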
Problem

Research questions and friction points this paper is trying to address.

Optimizing non-trainable internal weights in random feature maps
Improving forecasting accuracy for dynamical systems using feature selection
Comparing performance and cost of random feature maps vs neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hit-and-run algorithm cheaply selects good internal weights
Good feature count controls forecasting accuracy
Random feature maps outperform neural networks cost-effectively
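The hit-and-run idea behind the first contribution can be illustrated with a generic sampler over a convex region. Here the region is the set of internal weight/bias pairs (w, b) whose pre-activations w·x + b stay inside a band [L, U] on the training data; the band, the stand-in data, and all names are assumptions for illustration, not the paper's exact construction of "good" features:

```python
import numpy as np

def hit_and_run(A, c, z0, n_samples, rng):
    """Sample from the polytope {z : A @ z <= c}, starting at interior z0.

    Each step picks a random direction, finds the feasible chord through
    the current point, and jumps to a uniform point on that chord.
    """
    z = z0.astype(float).copy()
    out = []
    for _ in range(n_samples):
        direction = rng.standard_normal(z.size)
        direction /= np.linalg.norm(direction)
        Ad = A @ direction
        slack = c - A @ z                     # nonnegative while feasible
        pos, neg = Ad > 1e-12, Ad < -1e-12
        t_hi = np.min(slack[pos] / Ad[pos], initial=np.inf)
        t_lo = np.max(slack[neg] / Ad[neg], initial=-np.inf)
        z = z + rng.uniform(t_lo, t_hi) * direction
        out.append(z.copy())
    return np.array(out)

rng = np.random.default_rng(1)
N, d = 50, 3
Xtr = rng.normal(size=(N, d)) * 5.0           # stand-in training states
Aug = np.hstack([Xtr, np.ones((N, 1))])       # rows [x_i, 1] act on z=(w, b)
L, U = -2.0, 2.0                              # assumed "good feature" band
A = np.vstack([Aug, -Aug])                    # L <= [x,1]·z <= U for all x
c = np.concatenate([np.full(N, U), np.full(N, -L)])
z0 = np.zeros(d + 1)                          # strictly interior start
samples = hit_and_run(A, c, z0, 200, rng)     # candidate (w, b) pairs
```

Because every accepted (w, b) keeps the tanh argument inside the band on the whole training set, each sample yields a feature that is neither saturated nor trivially linear; sampling D such rows builds the internal weight matrix without any gradient computation.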