🤖 AI Summary
Channel estimation in Rydberg atom quantum receivers (RAQRs) suffers from severe nonlinearity due to bias-phase retrieval, especially under low signal-to-noise ratio (SNR); conventional iterative algorithms exhibit poor accuracy and robustness and fail to model system non-idealities. Method: We propose a model-driven deep learning framework built upon an enhanced Expectation-Maximization–Gerchberg–Saxton (EM-GS) algorithm, featuring URformer—a novel network integrating learnable filtering modules, adaptive gating mechanisms, and channel-wise attention-based Transformer blocks—explicitly embedding physical priors and phase reconstruction theory for end-to-end joint optimization. Contribution/Results: Experiments demonstrate that the proposed method significantly outperforms both traditional iterative algorithms and black-box neural networks under low-SNR conditions, achieving over 30% improvement in channel estimation accuracy and reducing pilot overhead by 40%, while exhibiting superior generalization and robustness against hardware imperfections.
📝 Abstract
The advent of Rydberg atomic quantum receivers (RAQRs) offers a new solution for the evolution of wireless transceiver architecture, promising unprecedented sensitivity and immunity to thermal noise. However, RAQRs introduce a unique non-linear signal model based on biased phase retrieval, which complicates fundamental channel estimation tasks. Traditional iterative algorithms often struggle in low signal-to-noise regimes and fail to capture complex and non-ideal system characteristics. To address this, we propose a novel model-driven deep learning framework for channel estimation in RAQRs. Specifically, we propose a Transformer-based unrolling architecture, termed URformer, which is derived by unrolling a stabilized variant of the expectation-maximization Gerchberg-Saxton (EM-GS) algorithm. Specifically, each layer of the proposed URformer incorporates three trainable modules: 1) a learnable filter implemented by a neural network that replaces the fixed Bessel function ratio in the classic EM-GS algorithm; 2) a trainable gating mechanism that adaptively combines classic and model-based updates to ensure training stability; and 3) a efficient channel Transformer block that learns to correct residual errors by capturing non-local dependencies across the channel matrix. Numerical results demonstrate that the proposed URformer significantly outperforms classic iterative algorithms and conventional black-box neural networks with less pilot overhead.