🤖 AI Summary
GRAND decoding suffers from poor parallelism and high latency due to sequential testing of error patterns (EPs). To address this, we propose an EP-tree (EPT)-based parallel decoding framework, the first to uniformly model EPs as a binary tree structure—enabling efficient tree traversal and parallel pruning while strictly guaranteeing maximum-likelihood (ML) performance. Building on EPT, we design parallelized SGRAND and hybrid ORBGRAND algorithms that jointly exploit soft information and reliability ordering for fine-grained parallel exploration and dynamic path pruning. Experimental results show that parallel SGRAND achieves a 3.75× speedup over its serial counterpart, while hybrid ORBGRAND attains a 4.8× acceleration; both demonstrate strong hardware mapping potential. Our core innovations lie in the EPT modeling paradigm and a synergistic decoding mechanism that simultaneously ensures ML optimality and high parallel efficiency.
📝 Abstract
Advances in parallel hardware platforms have motivated the development of efficient universal decoders capable of meeting stringent throughput and latency requirements. Guessing Random Additive Noise Decoding (GRAND) is a recently proposed decoding paradigm that sequentially tests Error Patterns (EPs) until finding a valid codeword. While Soft GRAND (SGRAND) achieves maximum-likelihood (ML) decoding, its inherently sequential nature hinders parallelism and results in high decoding latency. In this work, we utilize a unified binary tree representation of EPs, termed the EP tree, which enables compact representation, efficient manipulation, and parallel exploration. Building upon this EP tree representation, we propose a parallel design of SGRAND, preserving its ML optimality while significantly reducing decoding latency through pruning strategies and tree-based computation. Furthermore, we develop a hybrid GRAND algorithm that enhances Ordered Reliability Bits (ORB) GRAND with the EP tree representation, thereby achieving ML decoding with minimal additional computational cost beyond ORBGRAND while retaining parallel efficiency. Numerical experiments demonstrate that parallel SGRAND achieves a $3.75\times$ acceleration compared to serial implementation, while the hybrid enhanced method achieves a $4.8\times$ acceleration, with further gains expected under hardware mapping.
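To make the EP-tree idea concrete, here is a minimal Python sketch of serial SGRAND-style decoding (not the paper's implementation; `sgrand`, its signature, and the toy parity-check matrix are illustrative assumptions). Each error pattern is a tree node whose two children either *extend* the pattern with the next least-reliable position or *swap* its last flipped position for that one, so every EP is generated exactly once in non-decreasing soft cost, which is exactly the binary-tree structure of EPs described above:

```python
import heapq

def sgrand(llr, H, max_queries=10_000):
    """SGRAND-style ML guessing decoder sketch.

    llr : per-bit log-likelihood ratios (sign -> hard decision,
          magnitude -> reliability); H : parity-check rows (lists of 0/1).
    Returns (decoded codeword, number of EP queries) or (None, queries).
    """
    n = len(llr)
    hard = [1 if l < 0 else 0 for l in llr]              # hard decisions
    order = sorted(range(n), key=lambda i: abs(llr[i]))  # least reliable first
    cost = [abs(llr[i]) for i in order]                  # soft cost to flip

    def is_codeword(v):
        return all(sum(h[i] * v[i] for i in range(n)) % 2 == 0 for h in H)

    # Heap of (total soft cost, EP as tuple of positions in `order`);
    # popping in cost order makes the first hit the ML codeword.
    heap = [(0.0, ())]
    queries = 0
    while heap and queries < max_queries:
        c, ep = heapq.heappop(heap)
        cand = hard[:]
        for k in ep:
            cand[order[k]] ^= 1                          # apply error pattern
        queries += 1
        if is_codeword(cand):
            return cand, queries
        # Binary-tree children: extend with, or swap in, the next position.
        j = ep[-1] if ep else -1
        if j + 1 < n:
            heapq.heappush(heap, (c + cost[j + 1], ep + (j + 1,)))
            if ep:
                heapq.heappush(heap,
                               (c - cost[j] + cost[j + 1], ep[:-1] + (j + 1,)))
    return None, queries
```

Because EPs are popped in non-decreasing cost, the first valid codeword is ML-optimal; the parallel variants proposed in the paper expand multiple tree nodes per step and prune subtrees instead of popping one node at a time.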