🤖 AI Summary
This work addresses the challenge of reconstructing the three-dimensional dark matter distribution from single-view 2D weak gravitational lensing images—overcoming the limitations of conventional two-dimensional projected analyses to enable precise structural localization and cosmological model testing. We propose the Gravitational-Constraint Neural Field (GCNF), a continuous, structure-agnostic neural representation of the dark matter field. GCNF is jointly optimized end-to-end with a physically grounded, differentiable weak-lensing forward model, and incorporates uncertainty-aware galaxy shape modeling. Evaluated on high-fidelity synthetic data, GCNF substantially outperforms existing methods and achieves high-fidelity 3D reconstruction of unexpected large-scale structures. This establishes a novel paradigm for dark matter field inversion, enabling physically consistent, resolution-agnostic, and uncertainty-quantified 3D cosmic structure recovery from sparse, noisy lensing observations.
📝 Abstract
Weak gravitational lensing is the slight distortion of galaxy shapes caused primarily by the gravitational effects of dark matter in the universe. In our work, we seek to invert the weak lensing signal from 2D telescope images to reconstruct a 3D map of the universe's dark matter field. While inversion typically yields a 2D projection of the dark matter field, accurate 3D maps of the dark matter distribution are essential for localizing structures of interest and testing theories of our universe. However, 3D inversion poses significant challenges. First, unlike standard 3D reconstruction, which relies on multiple viewpoints, here images are observed from only a single viewpoint. This challenge can be partially addressed by observing how galaxy emitters throughout the volume are lensed. This leads to the second challenge, however: the shapes and exact locations of the unlensed galaxies are unknown and can only be estimated with large uncertainty. This introduces an overwhelming amount of noise that nearly drowns out the lensing signal. Previous approaches tackle this by imposing strong assumptions about the structures in the volume. We instead propose a methodology using a gravitationally constrained neural field to flexibly model the continuous matter distribution. We take an analysis-by-synthesis approach, optimizing the weights of the neural network through a fully differentiable physical forward model to reproduce the lensing signal present in image measurements. We showcase our method on simulations, including realistic simulated measurements of dark matter distributions that mimic data from upcoming telescope surveys. Our results show that our method not only outperforms previous methods, but, importantly, is also able to recover potentially surprising dark matter structures.
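To make the analysis-by-synthesis idea concrete, here is a minimal toy sketch (not the paper's GCNF or its forward model): a 3D density grid is optimized by gradient descent so that a simple differentiable linear projection of it, standing in for the lensing forward model, matches a noisy single-view 2D observation. All names, grid sizes, and the line-of-sight weighting are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a 3D density volume and a single 2D "observation".
rng = np.random.default_rng(0)
nz, ny, nx = 4, 8, 8                 # line-of-sight depth x sky plane
true_density = np.zeros((nz, ny, nx))
true_density[2, 3:5, 3:5] = 1.0      # one compact "halo" in the volume

# Toy differentiable forward model: a weighted projection along the line of
# sight (a crude stand-in for a lensing efficiency kernel), plus shape noise.
weights = np.linspace(0.2, 1.0, nz)[:, None, None]

def forward(density):
    return (weights * density).sum(axis=0)   # 3D volume -> 2D signal

obs = forward(true_density) + 0.05 * rng.standard_normal((ny, nx))

# Analysis-by-synthesis: adjust the density parameters so the synthesized
# signal reproduces the observation. The forward model is linear here, so
# the gradient of the squared-error loss is analytic.
density = np.zeros_like(true_density)
lr = 0.5
for _ in range(200):
    residual = forward(density) - obs        # (ny, nx) data mismatch
    grad = weights * residual[None, :, :]    # "backprop" through projection
    density -= lr * grad

print(np.abs(forward(density) - obs).max())  # residual shrinks toward zero
```

Note that fitting the projected signal does not uniquely determine the 3D volume: many depth distributions project to the same 2D map. That ill-posedness is exactly why the paper pairs the forward model with a gravitationally constrained neural field rather than a free per-voxel grid.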