🤖 AI Summary
This work proposes a two-stage near-maximum-likelihood (near-ML) decoding framework tailored for ultra-reliable low-latency communication with short blocklengths, where conventional near-ML decoders achieve excellent error performance at the cost of high complexity. In the first stage, a low-complexity decoder rapidly generates a candidate codeword; if decoding fails, a second stage activates multipoint code-weight sphere decoding (MP-WSD). MP-WSD leverages precomputed low-weight codewords to construct structured local perturbations of the current estimate and efficiently searches for better solutions within an adaptively tightened Euclidean sphere. Combined with CRC-aided early termination, the proposed method maintains near-ML error performance while significantly reducing average decoding complexity—particularly at high signal-to-noise ratios, where the second stage is rarely invoked—thus achieving an effective balance between low latency and high reliability.
📝 Abstract
Ultra-reliable low-latency communications (URLLC) operate with short packets, where finite-blocklength effects make near-maximum-likelihood (near-ML) decoding desirable but often too costly. This paper proposes a two-stage near-ML decoding framework that applies to any linear block code. In the first stage, we run a low-complexity decoder to produce a candidate codeword and validate it with a cyclic redundancy check (CRC). When this stage succeeds, we terminate immediately. When it fails, we invoke a second-stage decoder, termed multipoint code-weight sphere decoding (MP-WSD). The central idea behind MP-WSD is to concentrate the ML search where it matters. We pre-compute a set of low-weight codewords and use them to generate structured local perturbations of the current estimate. Starting from the first-stage output, MP-WSD iteratively explores a small Euclidean sphere of candidate codewords formed by adding selected low-weight codewords, tightening the search region as better candidates are found. This design keeps the average complexity low: at high signal-to-noise ratio, the first stage succeeds with high probability and the second stage is rarely activated; when it is activated, the search remains localized. Simulation results show that the proposed decoder attains near-ML performance for short-blocklength, low-rate codes while maintaining low decoding latency.