Extreme Learning Machines for Exoplanet Simulations: A Faster, Lightweight Alternative to Deep Learning

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-resolution astrophysical and climate simulations suffer from prohibitively high model complexity and training costs. Method: This work pioneers the application of Extreme Learning Machines (ELMs) to surrogate modeling in astronomy and climate science, proposing a lightweight, gradient-free alternative. We design ELM architectures—both standalone and ensemble—for sequential and image data, benchmarking them against mainstream neural networks including BiRNNs and CNNs. Results: On exoplanet simulation tasks, ELMs achieve a 10⁵× speedup in training and 40× in inference over BiRNNs on sequential data, while attaining superior accuracy; on image data, they match CNN accuracy at 16.4× faster training. This study uncovers a novel trade-off between sample efficiency and model complexity, empirically validating ELMs’ computational efficiency and generalization capability for scientific simulation surrogates.

📝 Abstract
Increasing resolution and coverage of astrophysical and climate data necessitates increasingly sophisticated models, often pushing the limits of computational feasibility. While emulation methods can reduce calculation costs, the neural architectures typically used--optimised via gradient descent--are themselves computationally expensive to train, particularly in terms of data generation requirements. This paper investigates the utility of the Extreme Learning Machine (ELM) as a lightweight, non-gradient-based machine learning algorithm for accelerating complex physical models. We evaluate ELM surrogate models in two test cases with different data structures: (i) sequentially-structured data, and (ii) image-structured data. For test case (i), where the number of samples $N \gg d$, the dimensionality of the input data, ELMs achieve remarkable efficiency, offering a 100,000$\times$ faster training time and a 40$\times$ faster prediction speed compared to a Bi-Directional Recurrent Neural Network (BIRNN), whilst improving upon BIRNN test performance. For test case (ii), characterised by $d \gg N$ and image-based inputs, a single ELM was insufficient, but an ensemble of 50 individual ELM predictors achieves comparable accuracy to a benchmark Convolutional Neural Network (CNN), with a 16.4$\times$ reduction in training time, though costing a 6.9$\times$ increase in prediction time. We find different sample efficiency characteristics between the test cases: in test case (i) individual ELMs demonstrate superior sample efficiency, requiring only 0.28% of the training dataset compared to the benchmark BIRNN, while in test case (ii) the ensemble approach requires 78% of the data used by the CNN to achieve comparable results--representing a trade-off between sample efficiency and model complexity.
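The non-gradient-based training the abstract refers to is the defining feature of an ELM: input-to-hidden weights are drawn at random and frozen, and only the hidden-to-output weights are solved in closed form via least squares. A minimal NumPy sketch of that recipe follows; it is illustrative only, and the layer width, `tanh` activation, and toy regression task are assumptions, not the paper's actual surrogate architecture.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine for regression (illustrative sketch)."""

    def __init__(self, n_hidden, seed=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random input weights and biases -- drawn once, never updated.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)  # hidden-layer activations
        # Output weights via a single least-squares solve: no gradient descent,
        # which is the source of the training-time speedups reported above.
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Quick check on a toy smooth-function regression task.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = ELM(n_hidden=200, seed=0).fit(X, y)
mse = float(np.mean((model.predict(X) - y) ** 2))
```

Because training reduces to one matrix solve, cost scales with the hidden-layer width rather than with epochs of backpropagation, which is why the $N \gg d$ sequential test case suits a single ELM so well.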
Problem

Research questions and friction points this paper is trying to address.

ELM as a faster alternative to deep learning for exoplanet simulations
Evaluating ELM efficiency in sequential and image-structured data cases
Comparing ELM sample efficiency against BIRNN and CNN baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Extreme Learning Machine for faster training
Applies ELM ensemble for image data accuracy
Achieves significant speedup over traditional networks
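For the image case ($d \gg N$), the paper reports that a single ELM is insufficient and an ensemble of 50 predictors is used instead. A minimal sketch of one way to do this, averaging the predictions of independently initialised ELMs, is below; the combination rule, weight scaling, and flattened-image toy data are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def fit_elm(X, y, n_hidden, rng):
    """Fit one ELM member: random frozen features + least-squares readout."""
    # Scale random weights by 1/sqrt(d) to keep tanh out of saturation.
    W = rng.normal(size=(X.shape[1], n_hidden)) / np.sqrt(X.shape[1])
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(params, X):
    W, b, beta = params
    return np.tanh(X @ W + b) @ beta

# Ensemble: train 50 members with different random features, average outputs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 64))   # e.g. flattened 8x8 "images"
y = X[:, :8].sum(axis=1)                 # toy target for the sketch
members = [fit_elm(X, y, n_hidden=100, rng=rng) for _ in range(50)]
y_hat = np.mean([predict_elm(p, X) for p in members], axis=0)
mse = float(np.mean((y_hat - y) ** 2))
```

Each member is still cheap to train (one solve apiece), which is consistent with the reported 16.4x training-time reduction, while querying all 50 members explains why ensemble prediction costs more at inference time than a single CNN forward pass.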