Learning Phase Distortion with Selective State Space Models for Video Turbulence Mitigation

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address atmospheric-turbulence-induced video degradation in long-range imaging, this paper proposes the first end-to-end turbulence mitigation method that integrates a selective state-space model (Mamba) with a learnable latent phase distortion (LPD) representation. The method overcomes the limited receptive field of conventional CNNs and the quadratic computational complexity of Transformers by introducing a Zernike-free, physics-inspired LPD that models wavefront phase perturbations, while jointly evolving spatiotemporal states to reduce the ill-posedness of the inverse problem. Evaluated on both synthetic and real-world datasets, the approach achieves new state-of-the-art performance, with significant gains in PSNR and SSIM, a several-fold reduction in inference latency, and efficient parallel processing of long video sequences. This work establishes a practical path toward real-time, high-resolution turbulence correction.

📝 Abstract
Atmospheric turbulence is a major source of image degradation in long-range imaging systems. Although numerous deep learning-based turbulence mitigation (TM) methods have been proposed, many are slow, memory-hungry, and do not generalize well. In the spatial domain, convolution-based methods have a limited receptive field, so they cannot capture the large spatial dependencies that turbulence induces. In the temporal domain, methods relying on self-attention can, in theory, leverage the lucky effect of turbulence, but their quadratic complexity makes it difficult to scale to many frames, while traditional recurrent aggregation methods are hard to parallelize. In this paper, we present a new TM method based on two concepts: (1) a turbulence mitigation network based on the selective state space model (MambaTM), which provides a global receptive field in each layer across the spatial and temporal dimensions while maintaining linear computational complexity; and (2) learned latent phase distortion (LPD), which guides the state space model. Unlike classical Zernike-based representations of phase distortion, the LPD map captures the actual effects of turbulence, significantly improving the model's ability to estimate degradation by reducing the ill-posedness of the problem. Our method exceeds current state-of-the-art networks on various synthetic and real-world TM benchmarks with significantly faster inference. The code is available at http://github.com/xg416/MambaTM.
Problem

Research questions and friction points this paper is trying to address.

Mitigating video turbulence in long-range imaging systems
Overcoming limitations of slow, memory-heavy deep learning methods
Addressing spatial and temporal dependency challenges in turbulence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective State Space Model for turbulence mitigation
Learned Latent Phase Distortion guides state space
Linear complexity with global receptive field
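The "linear complexity with global receptive field" claim comes from the selective scan recurrence underlying Mamba-style models: each step updates a hidden state with input-dependent parameters, so the whole sequence is processed in O(T) while every output can depend on all earlier inputs. The sketch below is an illustrative NumPy version, not the paper's implementation; the function and parameter names (`selective_scan`, `A`, `B`, `C`, `delta`) are assumptions chosen to mirror the common Mamba formulation.

```python
import numpy as np

def selective_scan(x, A, B, C, delta):
    """Minimal selective state-space scan (illustrative only).

    x:     (T, D) input sequence (T steps, D channels)
    A:     (D, N) state-transition parameters (negative for stable decay)
    B, C:  (T, N) input-dependent ("selective") projections
    delta: (T, D) input-dependent step sizes, positive
    Returns y: (T, D) output sequence.
    """
    T, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))          # hidden state carried across time
    y = np.zeros((T, D))
    for t in range(T):
        # Zero-order-hold discretization per step:
        # h_t = exp(delta_t * A) * h_{t-1} + (delta_t * B_t) * x_t
        dA = np.exp(delta[t][:, None] * A)       # (D, N)
        dB = delta[t][:, None] * B[t][None, :]   # (D, N)
        h = dA * h + dB * x[t][:, None]
        # Read out: y_t = C_t . h_t, summed over the state dimension
        y[t] = (h * C[t][None, :]).sum(axis=1)
    return y
```

The loop is written sequentially for clarity; in practice this recurrence is evaluated with a parallel (associative) scan on GPU, which is what makes long video sequences tractable.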
Xingguang Zhang
Purdue University
Image and video processing · computational imaging · computer vision · generative models
Nicholas Chimitt
School of Electrical and Computer Engineering, Purdue University
Xijun Wang
School of Electrical and Computer Engineering, Purdue University
Yu Yuan
School of Electrical and Computer Engineering, Purdue University
Stanley H. Chan
School of Electrical and Computer Engineering, Purdue University