🤖 AI Summary
This work addresses the efficient decoding of binary balanced linear codes with preprocessing, with the goal of improving tolerance to random errors at low noise rates. We propose a local-optimization decoding framework based on sampling short dual codewords. During preprocessing, a polynomial-length advice string that depends only on the code is generated; during decoding, a random matrix $H$ whose rows are short dual codewords is used to estimate the proximity of the received word $w$ to the code via the Hamming weight of $Hw$, and a threshold-based decision rule guides the correction of low-weight errors. The method achieves an error-correction radius of $O((\log n)^2 / n)$, which is asymptotically optimal among decoders that threshold $Hw$ for a polynomial-size advice matrix $H$. The results not only improve the decoding performance of random linear codes under preprocessing but also expose previously overlooked practical security risks for cryptographic primitives based on the Learning Parity with Noise (LPN) problem in low-noise parameter regimes.
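The decoding phase can be made concrete with a minimal Python sketch. It assumes the advice matrix `H` of short dual codewords has already been produced by the (possibly inefficient) preprocessing; the names `score` and `local_decode`, the greedy flip schedule, and the exact stopping rule are illustrative choices here, not the paper's precise algorithm.

```python
import numpy as np

def score(H, w):
    """Proximity measure: the Hamming weight of Hw over GF(2).

    H and w are 0/1 integer arrays. Rows of H are short dual codewords,
    so for w = c + e with c in the code, Hw = He sees only the error e.
    """
    return int(np.count_nonzero((H @ w) % 2))

def local_decode(H, w, threshold):
    """Greedy local minimization of score(H, .) by single-bit flips.

    Illustrative sketch: accept once the score drops to `threshold`,
    and stop early at a local minimum. The paper's threshold-based
    decision rule may differ in detail.
    """
    w = w.copy()
    current = score(H, w)
    while current > threshold:
        best_score, best_i = current, None
        for i in range(len(w)):        # probe every single-bit flip
            w[i] ^= 1
            s = score(H, w)
            w[i] ^= 1                  # undo the probe
            if s < best_score:
                best_score, best_i = s, i
        if best_i is None:             # stuck in a local minimum
            break
        w[best_i] ^= 1                 # commit the most helpful flip
        current = best_score
    return w
```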
📝 Abstract
Prange's information-set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-\Omega(1)}$-balanced.
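For reference, here is a hedged Python sketch of the classical Prange baseline under a generator-matrix representation: repeatedly guess an information set, assume it carries no errors, re-encode, and accept if the result is close enough to the received word. The helper `gf2_inv` and the retry budget `max_iters` are our own illustrative choices.

```python
import numpy as np

def gf2_inv(A):
    """Invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination.

    Returns the inverse as a 0/1 integer array, or None if A is singular.
    """
    k = A.shape[0]
    M = np.concatenate([A % 2, np.eye(k, dtype=int)], axis=1)
    for col in range(k):
        pivot = next((r for r in range(col, k) if M[r, col]), None)
        if pivot is None:
            return None                       # no pivot: singular matrix
        M[[col, pivot]] = M[[pivot, col]]     # move the pivot row up
        for r in range(k):
            if r != col and M[r, col]:
                M[r] = (M[r] + M[col]) % 2    # eliminate the column
    return M[:, k:]

def prange_decode(G, w, t, max_iters=10_000, rng=None):
    """Standard Prange loop: guess an error-free information set.

    G is a k x n generator matrix, w the received word, t the target
    error weight; max_iters is an illustrative retry budget.
    """
    rng = rng or np.random.default_rng()
    k, n = G.shape
    for _ in range(max_iters):
        I = rng.choice(n, size=k, replace=False)  # random information set
        G_I_inv = gf2_inv(G[:, I])
        if G_I_inv is None:                       # dependent columns: retry
            continue
        m = (w[I] @ G_I_inv) % 2                  # decode as if I is error-free
        c = (m @ G) % 2                           # re-encode the message guess
        if np.count_nonzero((c + w) % 2) <= t:    # close enough to w?
            return c
    return None
```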
As a consequence, we improve the error tolerance of decoding random linear codes when inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities at low noise rates.
Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such an $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
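To see why this measure depends only on the error, note that every dual row $h$ satisfies $h^\top c = 0$ for all $c \in C$, so $Hw = H(c + e) = He$, and a row of low Hamming weight rarely overlaps a sparse error. A toy check using the length-$n$ repetition code, whose weight-2 dual codewords $e_i + e_{i+1}$ are among the shortest available (an illustrative example of ours, not one from the paper):

```python
import numpy as np

# Toy illustration with the length-n repetition code {0^n, 1^n}. Its dual is
# the set of all even-weight words, so the weight-2 words e_i + e_{i+1} are
# among the shortest dual codewords; wt(Hw) then counts "boundaries" in w.
n = 16
H = np.zeros((n - 1, n), dtype=int)
for i in range(n - 1):
    H[i, i] = H[i, i + 1] = 1          # short dual codeword e_i + e_{i+1}

codeword = np.ones(n, dtype=int)       # a codeword of the repetition code
noisy = codeword.copy()
noisy[[3, 9]] ^= 1                     # two isolated bit flips

wt = lambda v: int(np.count_nonzero(v))
print(wt(H @ codeword % 2))            # 0: dual rows annihilate codewords
print(wt(H @ noisy % 2))               # 4: each flip trips its two incident rows
```

In this toy case the measure is zero exactly on codewords and grows with the number of isolated flips, which is the behavior the local minimization exploits at low noise rates.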