GRAND for Gaussian Intersymbol Interference Channels

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational complexity of maximum-likelihood (ML) decoding over Gaussian intersymbol interference (ISI) channels with memory by extending the guessing random additive noise decoding (GRAND) framework to linear Gaussian ISI channels for the first time. The proposed SGRAND-ISI algorithm incorporates sequence reliability ordering and burst error modeling, and is equivalent to ML decoding. Furthermore, low-complexity order-reliability-bit (ORB) GRAND approximations are devised to reduce computational overhead. At a block error rate of $10^{-3}$, the method yields several decibels of gain over conventional GRAND that ignores channel memory, outperforms the state-of-the-art ORBGRAND-AI by at least 0.5 dB, and operates within only 0.1–0.2 dB of the ML lower bound, all while significantly lowering decoding complexity.
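As background for the decoding paradigm the paper builds on: in its basic hard-decision form, GRAND queries putative error patterns from most to least likely and accepts the first pattern whose removal yields a valid codeword. A minimal sketch, assuming a [7,4] Hamming code and Hamming-weight ordering (the memoryless-channel ordering that the paper's memory-aware variants replace); all names here are illustrative, not from the paper:

```python
import itertools

import numpy as np

# Illustrative parity-check matrix of the [7,4] Hamming code
# (a standard toy example; not the codes or channel from the paper).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(word, H):
    """A word is a codeword iff its syndrome H @ word (mod 2) is zero."""
    return not np.any((H @ word) % 2)

def grand_hard(y, H, max_weight=3):
    """Hard-decision GRAND: test error patterns in order of increasing
    Hamming weight (most likely first on a memoryless channel) until the
    corrected word passes the parity check."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            candidate = (y + e) % 2
            if is_codeword(candidate, H):
                return candidate, e
    return None, None  # abandon guessing beyond max_weight

# All-zero codeword with a single bit flipped by the channel.
y = np.array([0, 0, 0, 0, 1, 0, 0])
c, e = grand_hard(y, H)
# c recovers the all-zero codeword; e pinpoints the flipped bit.
```

The paper's contribution is precisely to replace this weight ordering with a sequence-reliability ordering that accounts for error bursts induced by channel memory.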

📝 Abstract
Channel decoding is a challenging task in communication channels exhibiting memory effects. In this work, we apply the recently proposed decoding paradigm of guessing random additive noise decoding (GRAND) to channels with memory, focusing on linear Gaussian intersymbol interference (ISI) channels. For describing error patterns (EPs), we introduce the concept of error bursts to account for the memory effect, and define sequence reliability to characterize the likelihood of an EP. Based on sequence reliability, we obtain the optimal GRAND algorithm as a generalization of soft GRAND (SGRAND) for linear Gaussian ISI channels, termed SGRAND-ISI, which is equivalent to the maximum-likelihood (ML) decoding algorithm. We then develop order-reliability-bit (ORB) GRAND algorithms based on SGRAND-ISI to facilitate implementation. In numerical experiments, our proposed algorithms achieve multiple-dB improvements over GRAND algorithms that ignore channel memory, and can often attain performance within 0.1--0.2 dB of the ML lower bound. We also compare our proposed algorithms with the recently proposed ORBGRAND-Approximate Independence algorithm for handling channel memory, and observe a performance gain of at least 0.5 dB at a block error rate of $10^{-3}$, while incurring substantially lower computational complexity.
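The ordered-reliability-bits idea behind the ORB GRAND variants mentioned in the abstract can be illustrated in isolation: rank bit positions by soft reliability |LLR| and schedule error patterns by increasing logistic weight, i.e. the sum of the reliability ranks of the flipped bits. A minimal sketch of standard ORBGRAND scheduling, ignoring the paper's burst/memory extension; function and variable names are illustrative:

```python
import itertools

import numpy as np

def orbgrand_patterns(llrs, max_patterns=8):
    """Enumerate error patterns in increasing logistic weight, where a
    flipped bit contributes its reliability rank (1 = least reliable).
    Brute-force over all subsets, so only suitable for tiny block lengths."""
    n = len(llrs)
    order = np.argsort(np.abs(llrs))  # bit positions, least reliable first
    scored = []
    for r in range(n + 1):
        for ranks in itertools.combinations(range(1, n + 1), r):
            scored.append((sum(ranks), ranks))
    scored.sort()  # sort by logistic weight; ties broken lexicographically
    patterns = []
    for _, ranks in scored[:max_patterns]:
        e = np.zeros(n, dtype=int)
        for rank in ranks:
            e[order[rank - 1]] = 1
        patterns.append(e)
    return patterns

# Four received bits with soft reliabilities; bit 1 is least reliable.
llrs = np.array([2.0, -0.5, 1.0, -3.0])
patterns = orbgrand_patterns(llrs, max_patterns=5)
# First queries: no error, then flips of the least reliable bits.
```

On a memoryless Gaussian channel this per-bit ranking tracks the true likelihood order well; the abstract's point is that under ISI the EP likelihood depends on the whole sequence, which is what sequence reliability captures.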
Problem

Research questions and friction points this paper is trying to address.

intersymbol interference
channel decoding
memory effects
Gaussian channels
error patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRAND
intersymbol interference
sequence reliability
error burst
maximum-likelihood decoding
Zhuang Li
Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
Wenyi Zhang
University of Science and Technology of China
wireless communications, information theory, statistical inference