🤖 AI Summary
Cosmological parameter inference is bottlenecked by the high computational cost of numerical simulators. This work introduces a symbolic surrogate model, built from symbolic regression and approximations to hypergeometric functions, that enables rapid evaluation of the ΛCDM comoving distance (errors below 0.001%) and linear growth factor (errors below 0.05%) for all redshifts and for Ω_m ∈ [0.1, 0.5], covering the parameter space relevant for current cosmological analyses. Integrated into a DES-like 3×2pt likelihood analysis pipeline, the surrogate preserves physical fidelity while substantially reducing memory footprint and runtime: it yields posterior constraints statistically consistent with standard numerical methods while achieving a one-to-two order-of-magnitude speedup. Its core contribution is the first symbolic cosmological surrogate to simultaneously deliver high accuracy, coverage of the full prior range used in current analyses, and low computational overhead, establishing a new paradigm for real-time parameter inference in next-generation large-scale surveys.
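For context, the two quantities being emulated admit exact closed forms in flat ΛCDM in terms of Gauss's hypergeometric function $_2F_1$; these standard expressions (not quoted from the paper itself) are what the symbolic approximations replace:

$$
D_C(z) = \frac{2c}{H_0\sqrt{\Omega_{\rm m}}}\left[{}_2F_1\!\left(\tfrac{1}{6},\tfrac{1}{2};\tfrac{7}{6};-s\right) - \frac{1}{\sqrt{1+z}}\,{}_2F_1\!\left(\tfrac{1}{6},\tfrac{1}{2};\tfrac{7}{6};-\tfrac{s}{(1+z)^{3}}\right)\right], \qquad s \equiv \frac{1-\Omega_{\rm m}}{\Omega_{\rm m}},
$$

$$
D(a) \propto a\,{}_2F_1\!\left(\tfrac{1}{3},\,1;\,\tfrac{11}{6};\,-s\,a^{3}\right).
$$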
📝 Abstract
In cosmology, emulators play a crucial role by providing fast and accurate predictions of complex physical models, enabling efficient exploration of high-dimensional parameter spaces that would be computationally prohibitive with direct numerical simulations. Symbolic emulators have emerged as promising alternatives to numerical approaches, delivering comparable accuracy with significantly faster evaluation times. While previous symbolic emulators were limited to relatively narrow prior ranges, we expand these to cover the parameter space relevant for current cosmological analyses. We introduce approximations to hypergeometric functions used for the $\Lambda$CDM comoving distance and linear growth factor which are accurate to better than 0.001% and 0.05%, respectively, for all redshifts and for $\Omega_{\rm m} \in [0.1, 0.5]$. We show that integrating symbolic emulators into a Dark Energy Survey-like $3\times2$pt analysis produces cosmological constraints consistent with those obtained using standard numerical methods. Our symbolic emulators offer substantial improvements in speed and memory usage, demonstrating their practical potential for scalable, likelihood-based inference.
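As a minimal sketch (flat ΛCDM assumed; function names are illustrative, and the paper's fitted symbolic expressions are not reproduced here), the exact hypergeometric forms above can be evaluated with SciPy and cross-checked against direct quadrature:

```python
# Sketch: exact flat-LCDM comoving distance via Gauss's hypergeometric
# function 2F1, cross-checked against numerical quadrature of c/H(z).
# This shows the closed forms the symbolic approximations target.
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance_hyp(z, omega_m, h0=70.0):
    """Comoving distance [Mpc] in flat LCDM from the 2F1 closed form."""
    s = (1.0 - omega_m) / omega_m
    prefac = 2.0 * C_KM_S / (h0 * np.sqrt(omega_m))
    return prefac * (hyp2f1(1/6, 1/2, 7/6, -s)
                     - hyp2f1(1/6, 1/2, 7/6, -s / (1.0 + z)**3)
                       / np.sqrt(1.0 + z))

def comoving_distance_quad(z, omega_m, h0=70.0):
    """Same quantity by integrating c/H(z') from 0 to z."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + 1 - omega_m)
    val, _ = quad(integrand, 0.0, z)
    return C_KM_S / h0 * val

def growth_factor(a, omega_m):
    """Unnormalised linear growth factor D(a), growing mode, flat LCDM."""
    return a * hyp2f1(1/3, 1.0, 11/6, -a**3 * (1 - omega_m) / omega_m)

if __name__ == "__main__":
    for om in (0.1, 0.3, 0.5):          # prior range quoted in the abstract
        for z in (0.5, 1.0, 3.0):
            dh = comoving_distance_hyp(z, om)
            dq = comoving_distance_quad(z, om)
            print(f"Om={om:.1f} z={z:.1f}  D_C={dh:9.2f} Mpc  "
                  f"rel. diff vs quad = {abs(dh - dq) / dq:.2e}")
    # Growth factor normalised to unity today:
    print("D(a=0.5)/D(1) at Om=0.3:",
          growth_factor(0.5, 0.3) / growth_factor(1.0, 0.3))
```

Because each evaluation reduces to a couple of special-function calls rather than a quadrature per likelihood call, the closed form is already far cheaper than integration; the paper's symbolic approximations push this further by replacing $_2F_1$ itself with simple analytic expressions.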