🤖 AI Summary
The BCJR algorithm for optimal maximum a posteriori (MAP) decoding of convolutional codes incurs substantial hardware latency from its forward/backward recursions and per-step normalization. Method: This paper proposes a linear MAP (LMAP) decoder that reformulates BCJR decoding as a pair of forward and backward soft-input soft-output (SISO) dual encoders, enabling purely linear, bidirectional decoding implemented solely with shift registers and eliminating iterations, message normalization, and nonlinear operations. Contribution/Results: LMAP is presented as the first fully non-iterative, normalization-free, nonlinear-operation-free MAP decoding architecture, with a straightforward and natural mapping to hardware. Experimental results show that at BLER = 10⁻³, with block length 64 and a memory-14 convolutional code, LMAP matches classical BCJR performance, surpasses the random-coding union (RCU) bound by approximately 0.5 dB, and closely approaches both the normal approximation (NA) and meta-converse (MC) bounds. Decoding latency is also significantly reduced, making LMAP suitable for ultra-low-latency communication systems.
📝 Abstract
In this paper, we propose a linear representation of BCJR maximum a posteriori probability (MAP) decoding of a rate-1/2 convolutional code (CC), referred to as linear MAP (LMAP) decoding. We discover that MAP forward and backward decoding can be implemented by the corresponding dual soft-input soft-output (SISO) encoders using shift registers. The bidirectional MAP decoding output is obtained by combining the contents of the respective forward and backward dual encoders. Represented with simple shift registers, the LMAP decoder maps naturally to hardware registers and can therefore be implemented easily. Simulation results demonstrate that LMAP decoding achieves the same performance as BCJR MAP decoding while significantly reducing decoding delay. For block length 64, a CC with memory length 14 under LMAP decoding surpasses the random coding union (RCU) bound by approximately 0.5 dB at a BLER of $10^{-3}$, and closely approaches both the normal approximation (NA) and meta-converse (MC) bounds.
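For readers unfamiliar with the baseline being reformulated, the following is a minimal sketch of the *classical* BCJR MAP decoder (forward alpha recursion, backward beta recursion, APP combining) for a generic 4-state rate-1/2 convolutional code. This is not the paper's LMAP decoder; it is the conventional algorithm, included to make concrete the normalization steps and forward/backward recursions that LMAP is claimed to eliminate or linearize. The generator polynomials, LLR convention, and all function names here are illustrative assumptions, not taken from the paper.

```python
import math

# Illustrative rate-1/2 convolutional code, memory 2 (4 states).
# Generators chosen as an example; the paper uses a memory-14 code.
G = [0b111, 0b101]

def step(state, bit):
    """One trellis transition: returns (next_state, (c0, c1))."""
    reg = (bit << 2) | state                      # input bit enters the shift register
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, out

def encode(bits):
    """Shift-register encoder starting from the all-zero state."""
    state, out = 0, []
    for b in bits:
        state, c = step(state, b)
        out.extend(c)
    return out

def bcjr_decode(llr, n_states=4):
    """Classical BCJR in the probability domain.
    llr: per-coded-bit channel LLRs with convention LLR = log P(c=0)/P(c=1).
    Returns hard decisions on the information bits."""
    N = len(llr) // 2
    # Branch metrics: gamma[k] maps (state, input_bit) -> (next_state, likelihood).
    gamma = []
    for k in range(N):
        g = {}
        for s in range(n_states):
            for b in (0, 1):
                ns, (c0, c1) = step(s, b)
                m = 0.0
                for c, l in ((c0, llr[2 * k]), (c1, llr[2 * k + 1])):
                    p0 = 1.0 / (1.0 + math.exp(-l))
                    m += math.log(p0 if c == 0 else 1.0 - p0)
                g[(s, b)] = (ns, math.exp(m))
        gamma.append(g)
    # Forward recursion (alpha), with the per-step normalization that LMAP avoids.
    alpha = [[0.0] * n_states for _ in range(N + 1)]
    alpha[0][0] = 1.0                             # encoder starts in the zero state
    for k in range(N):
        for (s, b), (ns, gk) in gamma[k].items():
            alpha[k + 1][ns] += alpha[k][s] * gk
        t = sum(alpha[k + 1]) or 1.0
        alpha[k + 1] = [a / t for a in alpha[k + 1]]
    # Backward recursion (beta); uniform at the end (no trellis termination assumed).
    beta = [[1.0 / n_states] * n_states for _ in range(N + 1)]
    for k in range(N - 1, -1, -1):
        beta[k] = [0.0] * n_states
        for (s, b), (ns, gk) in gamma[k].items():
            beta[k][s] += beta[k + 1][ns] * gk
        t = sum(beta[k]) or 1.0
        beta[k] = [x / t for x in beta[k]]
    # APP combining of forward and backward metrics, then hard decision.
    bits = []
    for k in range(N):
        p = [0.0, 0.0]
        for (s, b), (ns, gk) in gamma[k].items():
            p[b] += alpha[k][s] * gk * beta[k + 1][ns]
        bits.append(0 if p[0] >= p[1] else 1)
    return bits
```

The key contrast with the abstract: this classical form needs normalization of alpha and beta at every step and nonlinear operations (exp, log) in the branch metrics, whereas LMAP is stated to realize the same forward/backward computation linearly in the dual encoders' shift registers and to combine their contents directly.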