Universal Maximum Likelihood (List) Decoding via Fast Vector-Matrix Multiplication

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Maximum-likelihood decoding (MLD) of general block codes suffers from a worst-case computational complexity of $q^k n$ multiplications, where $q$ is the alphabet size, $k$ the dimension, and $n$ the length, rendering it impractical for large codes. Method: This paper proposes a universal decoding framework that unifies likelihood computation over all codewords into a single vector-matrix multiplication and accelerates it via the Mailman algorithm. The approach supports hard- and soft-decision decoding, nonlinear codes, and list decoding, without requiring any structural assumptions (e.g., linearity or algebraic structure). By precomputing a codebook matrix and leveraging efficient vector-matrix operations, it reduces the worst-case number of multiplications from $q^k n$ to $q^k$. Contribution/Results: The method achieves an $n$-fold theoretical speedup while preserving MLD optimality. Experiments confirm substantial runtime improvements without performance degradation, establishing the first decoding paradigm that simultaneously guarantees ML optimality and high efficiency for arbitrary block codes.
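The inner-product view of likelihood computation can be sketched for a memoryless channel in the log domain (a minimal illustration; the toy channel, codebook, and variable names are assumptions, not the paper's exact construction): stack the per-position log-likelihoods $\log P(y_i \mid s)$ for every candidate symbol $s$ into a received-sequence vector, one-hot encode each codeword over (position, symbol) pairs as a column of the codebook matrix, and a single vector-matrix product then yields every codeword's log-likelihood at once.

```python
import numpy as np

q, n = 2, 3                       # alphabet size and block length (toy values)
p = 0.1                           # illustrative BSC crossover probability
channel = np.array([[1 - p, p],   # channel[s, r] = P(receive r | send s)
                    [p, 1 - p]])

# Small (nonlinear) codebook: one codeword per row
codebook = np.array([[0, 0, 0],
                     [1, 1, 1],
                     [0, 1, 1]])

# Codebook matrix M: column j one-hot encodes codeword j over (position, symbol) pairs
M = np.zeros((n * q, len(codebook)))
for j, c in enumerate(codebook):
    for i, s in enumerate(c):
        M[i * q + s, j] = 1.0

y = np.array([0, 1, 1])           # received word
# Received vector u: log P(y_i | s) for every position i and candidate symbol s
u = np.array([np.log(channel[s, y[i]]) for i in range(n) for s in range(q)])

loglik = u @ M                    # one vector-matrix product gives all log-likelihoods
ml_index = int(np.argmax(loglik)) # MLD = pick the maximum entry
```

Here `u @ M` recovers $\sum_i \log P(y_i \mid c_i)$ for each codeword $c$, so the argmax is the ML codeword; the paper's complexity gain comes from accelerating this product rather than from the encoding itself.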

📝 Abstract
Maximum-likelihood (ML) decoding for arbitrary block codes remains fundamentally hard, with worst-case time complexity, measured by the total number of multiplications, being no better than straightforward exhaustive search, which requires $q^{k} n$ operations for an $[n,k]_q$ code. This paper introduces a simple, code-agnostic framework that reduces the worst-case complexity by a factor of $n$, down to $q^{k}$ operations, a highly desirable reduction in practice. The result holds for both linear and nonlinear block codes over general memoryless channels and under both hard-decision and soft-decision decoding. It naturally extends to intersymbol-interference (ISI) channels and ML list decoding with only a negligible increase in complexity. Our core insight is that, upon receipt of each sequence at the receiver, the conditional probability of that sequence for each codeword in the codebook (i.e., the \emph{likelihood}) can be expressed as the inner product of two carefully constructed vectors: the first depending on the received sequence, and the second on that codeword itself. As a result, evaluating the likelihoods for all codewords in the codebook reduces to a single vector-matrix multiplication, and ML decoding (MLD) becomes the simple task of picking the maximum entry in the resulting vector. The only non-trivial cost lies in the vector-matrix product. However, our matrix construction allows the use of the Mailman algorithm to reduce this cost. This time reduction is achieved at the cost of high space complexity, requiring $\mathcal{O}(q^{k+1} n)$ space to store the pre-computed codebook matrix.
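The Mailman step can be sketched for the binary case (an illustrative implementation under stated assumptions: a 0/1 codebook matrix with columns stored as integers; the function name and the integer-coding convention are mine, not the paper's): the product $x A$ is computed by first forming $x U_m$ against the "universal" matrix $U_m$ whose $2^m$ columns enumerate all $m$-bit vectors, then reading off each actual column of $A$ by indexing, for a cost of $\mathcal{O}(2^m + n)$ additions instead of $\mathcal{O}(mn)$.

```python
import numpy as np

def mailman_vecmat(x, codes):
    """Compute z = x @ A, where column i of the 0/1 matrix A is the
    m-bit binary expansion of codes[i] (x[0] paired with the MSB).
    Cost: O(2^m + n) additions instead of O(m * n)."""
    # Recursively build y[j] = <x, binary expansion of j> for all 2^m columns
    # of the universal matrix U_m: U_m = [[0..0 1..1], [U_{m-1} U_{m-1}]].
    y = np.array([0.0])
    for xi in reversed(x):
        y = np.concatenate([y, y + xi])
    # Each true column of A is some column of U_m; just look it up.
    return y[codes]

# Check against direct multiplication on a random instance
rng = np.random.default_rng(0)
m, ncols = 8, 40
A = rng.integers(0, 2, size=(m, ncols))
codes = np.array([int("".join(map(str, A[:, i])), 2) for i in range(ncols)])
x = rng.standard_normal(m)
assert np.allclose(mailman_vecmat(x, codes), x @ A)
```

The paper applies this idea to the precomputed $q$-ary codebook matrix; the binary sketch above only illustrates why the matrix construction makes Mailman applicable.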
Problem

Research questions and friction points this paper is trying to address.

Reduces worst-case ML decoding complexity from $q^k n$ to $q^k$ operations
Applies to linear and nonlinear codes over memoryless and ISI channels
Uses vector-matrix multiplication and the Mailman algorithm for acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses vector-matrix multiplication for likelihood computation
Applies the Mailman algorithm to reduce time complexity
Achieves ML decoding via maximum-entry selection
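Once all likelihoods sit in one vector, ML list decoding reduces to selecting its $L$ largest entries rather than just the maximum, at negligible extra cost (a hedged sketch; the function name is illustrative):

```python
import numpy as np

def ml_list_decode(loglik, L):
    """Return the indices of the L most likely codewords, best first."""
    top = np.argpartition(loglik, -L)[-L:]        # unordered top-L in O(q^k)
    return top[np.argsort(loglik[top])[::-1]]     # then sort only L entries

loglik = np.array([-2.3, -0.1, -5.0, -0.7])
print(ml_list_decode(loglik, 2))                  # prints [1 3]
```

Using `argpartition` keeps the selection linear in the number of codewords, so list decoding adds only an $\mathcal{O}(L \log L)$ term on top of the vector-matrix product.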