Iterative Neural Rollback Chase-Pyndiah Decoding

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In iterative decoding of turbo product codes (TPCs), the Chase–Pyndiah component decoders produce inaccurate soft decisions in the early iterations, contaminating the extrinsic information and degrading performance. Method: The paper proposes a neural rollback mechanism that embeds a lightweight Transformer into the Chase–Pyndiah process to identify destructive extrinsic updates and selectively freeze them, instead of applying every update as in the conventional fixed-iteration schedule. Contribution/Results: Evaluated on TPCs with (256, 239) extended BCH component codes, the method gains approximately 0.145 dB over standard Chase–Pyndiah decoding with p = 6 at four full iterations and also outperforms conventional p = 7 decoding, while using only tens of thousands of parameters. This yields a favorable trade-off between computational complexity and decoding performance for soft-decision decoding in optical communications and other latency- and resource-constrained applications.
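The rollback idea can be pictured as a small binary classifier gating each extrinsic update. The PyTorch sketch below is illustrative only and is not the authors' architecture: a lightweight Transformer encoder scores a component-code row (its current LLRs together with the proposed extrinsic update) and the update is applied only when it is not flagged as destructive. All names and hyperparameters (RollbackGate, d_model, threshold) are assumptions for the sketch.

```python
# Minimal sketch of a Transformer-based rollback gate (illustrative only;
# the paper's exact features and architecture are not reproduced here).
import torch
import torch.nn as nn

class RollbackGate(nn.Module):
    def __init__(self, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        # Each codeword position contributes two features:
        # the current LLR and the proposed extrinsic update.
        self.embed = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=64,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)  # probability the update is destructive

    def forward(self, llr, extrinsic):
        x = torch.stack([llr, extrinsic], dim=-1)   # (batch, n, 2)
        h = self.encoder(self.embed(x))             # (batch, n, d_model)
        return torch.sigmoid(self.head(h.mean(dim=1))).squeeze(-1)

def gated_update(llr, extrinsic, gate, threshold=0.5):
    """Apply the extrinsic update only where the gate does not flag it;
    otherwise roll back and keep the message intact."""
    p_destructive = gate(llr, extrinsic)            # (batch,)
    keep = (p_destructive < threshold).float().unsqueeze(-1)
    return llr + keep * extrinsic
```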

📝 Abstract
Iterative decoding is essential in modern communication systems, especially optical communications, where error-correcting codes such as turbo product codes (TPC) and staircase codes are widely employed. A key factor in achieving high error correction performance is the use of soft-decision decoding for component codes. However, implementing optimal maximum a posteriori (MAP) probability decoding for commonly used component codes, such as BCH and Polar codes, is computationally prohibitive. Instead, practical systems rely on approximations, with the Chase-Pyndiah algorithm being a widely used suboptimal method. TPC are more powerful than their component codes and begin to function effectively at low signal-to-noise ratios. Consequently, during the initial iterations, the component codes do not perform well and introduce errors in the extrinsic information updates. This phenomenon limits the performance of TPC. This paper proposes a neural network-aided rollback Chase-Pyndiah decoding method to address this issue. A transformer-based neural network identifies cases where extrinsic updates are likely to introduce errors, triggering a rollback mechanism which prevents the update and keeps the component code message intact. Our results demonstrate that a neural network with a relatively small number of parameters can effectively distinguish destructive updates and improve decoding performance. We evaluate the proposed approach using TPC with (256, 239) extended BCH component codes. We show that the proposed method enhances the bit error rate performance of Chase-Pyndiah p=6 decoding, achieving a gain of approximately 0.145 dB in a TPC scheme with four full iterations, significantly outperforming conventional Chase p=7 decoding.
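For context, a TPC iteration alternates Chase–Pyndiah decoding of the rows and columns of the code block, exchanging scaled extrinsic LLRs between half-iterations. The sketch below shows where a rollback check could sit in that loop. It is a structural sketch under stated assumptions, not the paper's algorithm: `chase_pyndiah_extrinsic` and `is_destructive` are placeholder stubs (the real Chase–Pyndiah decoder and the trained Transformer gate, respectively), and `alpha` is a generic extrinsic scaling factor.

```python
# Illustrative TPC iteration with a rollback check (placeholders only).
import numpy as np

def chase_pyndiah_extrinsic(llr_row):
    """Placeholder for Chase-Pyndiah soft-output decoding of one
    (256, 239) eBCH row; here it just returns a damped copy of the input."""
    return 0.5 * llr_row

def is_destructive(llr_row, ext_row):
    """Placeholder for the neural rollback gate; always accepts updates here."""
    return False

def tpc_decode(channel_llr, num_iterations=4, alpha=0.5):
    """channel_llr: (n, n) LLR matrix of one product-code block."""
    ext = np.zeros_like(channel_llr)
    for _ in range(num_iterations):
        for axis in (0, 1):                          # rows, then columns
            a_priori = channel_llr + alpha * ext
            if axis == 1:                            # work on columns via views
                a_priori, ext = a_priori.T, ext.T
            for i in range(a_priori.shape[0]):
                candidate = chase_pyndiah_extrinsic(a_priori[i])
                if not is_destructive(a_priori[i], candidate):
                    ext[i] = candidate               # accept extrinsic update
                # else: rollback -- keep the previous extrinsic message intact
            if axis == 1:
                ext = ext.T
    return np.sign(channel_llr + ext)                # hard decisions
```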
Problem

Research questions and friction points this paper is trying to address.

Improving error correction in iterative decoding systems
Reducing computational cost of optimal MAP decoding
Enhancing Chase-Pyndiah algorithm with neural rollback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network-aided rollback Chase-Pyndiah decoding
Transformer-based network identifies destructive extrinsic updates
Rollback mechanism prevents errors in component code messages