Decoding Balanced Linear Codes With Preprocessing

📅 2025-10-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses efficient decoding of binary balanced linear codes with preprocessing, aiming to improve tolerance of random errors at low noise rates. We propose a local-optimization decoding framework based on sampling short dual codewords: during preprocessing, polynomial-length auxiliary information that depends only on the code is generated; during decoding, a random matrix of short dual codewords is used to estimate the Hamming distance from the received word to the code, and a threshold-based decision rule guides correction by locally minimizing this Hamming weight. The method achieves an error-correction radius of $O((\log n)^2 / n)$, which is asymptotically optimal among decoders that threshold such a measure. The results improve the error tolerance of decoding random linear codes and also expose previously overlooked security risks for cryptographic primitives based on Learning Parity with Noise (LPN) at low noise rates.

📝 Abstract
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-\Omega(1)}$-balanced. As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate. Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
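To make the abstract's proximity measure concrete, here is a minimal sketch, assuming a toy [7,4] Hamming code as a stand-in for a random balanced code. The advice matrix `H` is built by rejection-sampling short dual codewords, `proximity` computes $\mathrm{wt}(Hw \bmod 2)$, and `local_decode` greedily flips the bit that most reduces it. All function names and parameters (`sample_short_dual_rows`, `weight_cap`, `num_rows`) are illustrative, not the paper's notation or actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: n x k generator matrix of the [7,4] Hamming code.
G = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
])

def sample_short_dual_rows(G, num_rows, weight_cap):
    """Rejection-sample dual codewords h (h @ G = 0 mod 2) of low Hamming weight."""
    n = G.shape[0]
    rows = []
    while len(rows) < num_rows:
        h = rng.integers(0, 2, size=n)
        if 0 < h.sum() <= weight_cap and not (h @ G % 2).any():
            rows.append(h)
    return np.array(rows)

def proximity(H, w):
    """wt(Hw mod 2): vanishes on codewords, grows with distance to the code."""
    return int((H @ w % 2).sum())

def local_decode(H, w):
    """Greedy local search: flip the coordinate that most reduces proximity."""
    w = w.copy()
    while True:
        base = proximity(H, w)
        flips = [proximity(H, w ^ np.eye(len(w), dtype=int)[i])
                 for i in range(len(w))]
        if min(flips) >= base:      # local minimum reached
            return w
        w = w ^ np.eye(len(w), dtype=int)[int(np.argmin(flips))]

H = sample_short_dual_rows(G, num_rows=200, weight_cap=4)
msg = rng.integers(0, 2, size=4)
c = G @ msg % 2                     # transmitted codeword
received = c.copy()
received[3] ^= 1                    # inject a single-bit error
decoded = local_decode(H, received)
```

In this sketch the correct flip drives the measure to zero, so the greedy search recovers the codeword; the paper's contribution is showing that, for balanced codes and suitably sampled short dual rows, this measure remains a faithful distance proxy up to relative error rate $O((\log n)^2 / n)$.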
Problem

Research questions and friction points this paper is trying to address.

Improving error tolerance in decoding balanced linear codes with preprocessing
Enhancing decoding algorithms using polynomial-length advice strings
Analyzing vulnerabilities in cryptographic applications of noisy parity learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preprocessing advice improves decoding error rate
Hamming weight measures proximity for error correction
Locally minimizes weight for optimal error tolerance
Andrej Bogdanov
University of Ottawa
Rohit Chatterjee
Department of Computer Science, National University of Singapore
Yunqi Li
Rutgers University
Machine Learning · Information Retrieval · Recommender System · Trustworthy AI
Prashant Nalini Vasudevan
National University of Singapore
Cryptography · Complexity Theory