🤖 AI Summary
In ground-based astronomical imaging, atmospheric turbulence induces severe blur, high noise, and spatially varying point spread functions (PSFs) across multiple frames, fundamentally limiting high-fidelity nighttime sky image reconstruction. To address this, we propose the first self-supervised deep image prior method specifically designed for multi-frame astronomical image restoration—requiring neither paired ground-truth data nor pre-trained models. Our approach employs a customized CNN that jointly models inter-frame features, explicitly enforces photometric consistency constraints, and incorporates physically grounded spatially varying PSF priors. Furthermore, it unifies frame alignment and co-weighted reconstruction within a single optimization framework. Evaluated on real Hyper Suprime-Cam data, our method achieves substantial improvements in image sharpness and signal-to-noise ratio, yields more compact stellar point sources, and recovers richer structural details. This work establishes a novel unsupervised paradigm for astronomical image restoration.
📝 Abstract
Recovering high-fidelity images of the night sky from blurred observations is a fundamental problem in astronomy, where traditional methods typically fall short. In ground-based astronomy, combining multiple exposures to enhance signal-to-noise ratios is further complicated by variations in the point-spread function caused by atmospheric turbulence. In this work, we present a self-supervised multi-frame method, based on deep image priors, for denoising, deblurring, and coadding ground-based exposures. Central to our approach is a carefully designed convolutional neural network that integrates information across multiple observations and enforces physically motivated constraints. We demonstrate the method's potential by processing Hyper Suprime-Cam exposures, yielding promising preliminary results with sharper restored images.