🤖 AI Summary
To address the low spatial resolution of onboard hyperspectral imagery, which hinders real-time downstream applications, this paper proposes a track-wise causal deep neural network architecture tailored to push-broom imaging. The method incorporates a causal memory mechanism to model spectral-spatial dependencies line by line, achieving high reconstruction fidelity while substantially reducing memory footprint and computational cost. Combined with a lightweight network design, causal convolutions, and hardware-aware optimizations, the architecture is specifically adapted to resource-constrained, low-power onboard platforms. Experimental results show that the proposed approach matches or surpasses significantly more complex state-of-the-art models on quantitative metrics such as PSNR and SSIM, processes each line in sync with image acquisition, and achieves, for the first time, real-time onboard super-resolution reconstruction. This work establishes a viable technical pathway toward intelligent in-orbit processing of hyperspectral satellite data.
📝 Abstract
Hyperspectral imagers on satellites obtain the fine spectral signatures essential for distinguishing one material from another at the expense of limited spatial resolution. Enhancing the latter is thus a desirable preprocessing step to further improve the detection capabilities of hyperspectral images on downstream tasks. At the same time, there is a growing interest in deploying inference methods directly onboard satellites, which calls for lightweight image super-resolution methods that can run on the payload in real time. In this paper, we present a novel neural network design, called Deep Pushbroom Super-Resolution (DPSR), that matches the pushbroom acquisition of hyperspectral sensors by processing an image line by line in the along-track direction with a causal memory mechanism to exploit previously acquired lines. This design greatly limits memory requirements and computational complexity, achieving onboard real-time performance, i.e., the ability to super-resolve a line in the time it takes to acquire the next one, on low-power hardware. Experiments show that the quality of the super-resolved images is competitive with, or even surpasses, that of state-of-the-art methods that are significantly more complex.
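To make the causal, line-by-line processing concrete, the following is a minimal toy sketch of the acquisition-synchronized loop. It is not the DPSR network: the learned layers are replaced by simple linear interpolation, and the causal memory is stood in for by an exponential moving average of past lines. The function name, the `alpha` parameter, and the fusion rule are hypothetical illustrations; only the structural idea, i.e., that each output line depends solely on the current and previously acquired lines, reflects the paper.

```python
import numpy as np

def super_resolve_pushbroom(lines, scale=2, alpha=0.7):
    """Toy causal line-by-line super-resolution sketch (not the DPSR model).

    lines: array of shape (T, W, B) -- T along-track lines, W cross-track
           pixels, B spectral bands, assumed to arrive one line at a time.
    Returns an array of shape (T * scale, W * scale, B).
    """
    T, W, B = lines.shape
    out = np.zeros((T * scale, W * scale, B))
    # Causal state: a summary of previously acquired lines only.
    # (Hypothetical stand-in for the paper's learned causal memory.)
    memory = np.zeros((W, B))
    for t in range(T):
        # Fuse the newly acquired line with past context.
        fused = alpha * lines[t] + (1 - alpha) * memory
        # Cross-track upsampling by linear interpolation per band.
        x_lo = np.arange(W)
        x_hi = np.linspace(0, W - 1, W * scale)
        up = np.stack(
            [np.interp(x_hi, x_lo, fused[:, b]) for b in range(B)], axis=1
        )
        # Along-track: emit `scale` output rows for this input line.
        out[t * scale:(t + 1) * scale] = up
        # Update the memory only after emitting, preserving causality.
        memory = fused
    return out
```

Because the memory is updated strictly after each line is emitted, perturbing a future input line cannot change earlier output rows, which is the property that allows each line to be super-resolved within the time budget of acquiring the next one.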