🤖 AI Summary
To address the high latency of iterative sampling in diffusion-based error-correcting code (ECC) decoders—rendering them unsuitable for low-latency communication—this paper proposes an architecture-agnostic single-step neural decoding framework. The core method integrates consistency learning with probability flow ordinary differential equations (PF-ODEs), enabling direct noise-to-codeword mapping via differential-time regularization and eliminating multi-step sampling entirely. The framework is compatible with diverse backbone architectures and achieves state-of-the-art bit error rate (BER) performance across multiple benchmarks. Notably, it significantly outperforms autoregressive and diffusion-based decoders—especially on long codes—while accelerating inference by 30–100×. This yields a compelling trade-off between accuracy and real-time capability, establishing a new efficient paradigm for reliable digital communication.
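The summary above invokes the standard consistency-model machinery. As a hedged sketch using generic notation (the symbols $\mu$, $g$, and $F_\theta$ below follow the usual consistency-model literature and are not taken from this paper), the reverse denoising process is viewed as a probability flow ODE, and the one-step decoder is a consistency function that maps any point on a trajectory to its clean endpoint:

```latex
% Probability flow ODE (generic form): mu is the drift, g the diffusion
% coefficient, and p_t the marginal density at time t.
\[
\frac{\mathrm{d}x_t}{\mathrm{d}t}
  = \mu(x_t, t) - \tfrac{1}{2}\, g(t)^2 \, \nabla_{x} \log p_t(x_t)
\]

% Self-consistency condition: the learned map F_theta gives the same
% output at any two times on the same trajectory, and is the identity
% at t = 0 (the clean codeword).
\[
F_\theta(x_t, t) = F_\theta(x_{t'}, t')
  \quad \text{for all } t, t' \text{ on one trajectory},
\qquad
F_\theta(x_0, 0) = x_0 .
\]
```

Under this condition, a single evaluation $F_\theta(x_T, T)$ replaces the iterative sampler, which is what enables the reported one-step decoding.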
📝 Abstract
Error Correction Codes (ECC) are fundamental to reliable digital communication, yet designing neural decoders that are both accurate and computationally efficient remains challenging. Recent denoising diffusion decoders with transformer backbones achieve state-of-the-art performance, but their iterative sampling limits practicality in low-latency settings. We introduce the Error Correction Consistency Flow Model (ECCFM), an architecture-agnostic training framework for high-fidelity one-step decoding. By casting the reverse denoising process as a Probability Flow Ordinary Differential Equation (PF-ODE) and enforcing smoothness through a differential-time regularization, ECCFM learns to map noisy signals along the decoding trajectory directly to the original codeword in a single inference step. Across multiple decoding benchmarks, ECCFM attains lower bit-error rates (BER) than autoregressive and diffusion-based baselines, with notable improvements on longer codes, while delivering inference speedups of 30x to 100x over denoising diffusion decoders.
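The training idea described in the abstract can be illustrated with a toy sketch. Everything below is illustrative and assumed, not the paper's method: the `decoder` is a stand-in sign projection rather than a trained network, and the linear noise schedule `sigma(t) = t` is a simplifying choice. The point is the consistency objective: outputs at two adjacent times on the *same* noisy trajectory should agree, so that at test time one evaluation maps noise directly to a codeword.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a codeword x0 in {-1, +1}^n observed under additive noise
# x_t = x0 + sigma(t) * eps, with sigma(t) = t so t = 0 is noiseless.
n = 16
x0 = rng.choice([-1.0, 1.0], size=n)

def sigma(t):
    # Assumed linear noise schedule (illustrative, not from the paper).
    return t

def add_noise(x0, t, eps):
    return x0 + sigma(t) * eps

def decoder(x, t):
    # Hypothetical one-step decoder f(x, t): a hard sign projection
    # standing in for a trained neural network.
    return np.sign(x)

def consistency_loss(x0, t, dt, eps):
    """Consistency objective: decoder outputs at adjacent times t and
    t - dt along the same trajectory (shared eps) should coincide."""
    x_t = add_noise(x0, t, eps)
    x_prev = add_noise(x0, t - dt, eps)
    return np.mean((decoder(x_t, t) - decoder(x_prev, t - dt)) ** 2)

eps = rng.standard_normal(n)
loss = consistency_loss(x0, t=0.5, dt=0.01, eps=eps)

# One-step inference: a single decoder call on the noisy observation.
one_step = decoder(add_noise(x0, 0.5, eps), 0.5)
ber = np.mean(one_step != x0)
```

In a real system the squared-difference term would be backpropagated through the network parameters; here the fixed sign decoder only demonstrates the shape of the objective and the single-call inference path that replaces iterative sampling.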