🤖 AI Summary
This work investigates the feasibility of reducing the decoding problem for positive-rate linear codes to the Learning Parity with Noise (LPN) problem. Addressing the lack of a systematic parameter characterization for such reductions in the positive-rate regime, the authors develop a reduction framework based on *code smoothing*, integrating information-theoretic analysis with computational reduction techniques. They rigorously characterize the interdependence among decoding hardness, code rate, noise rate, and LPN dimension, establishing necessary and sufficient parameter conditions for the reduction to hold: efficient reductions are provably impossible under typical positive rates (e.g., constant rate) and reasonable noise levels, and exist only when the noise decays exponentially in the code length. This work precisely delineates the theoretical boundary of reducibility between these two fundamental problems, providing a critical criterion for security analyses of code-based cryptographic constructions.
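For readers unfamiliar with the LPN problem referenced above, the following is a minimal sketch of the LPN sample distribution: the adversary sees random linear equations in a secret binary vector, each perturbed by Bernoulli noise of rate `tau`. All names and parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lpn_samples(s, m, tau, rng):
    """Draw m LPN samples (a, <a, s> + e mod 2) for secret s,
    where each noise bit e is 1 with probability tau."""
    n = len(s)
    A = rng.integers(0, 2, size=(m, n))          # uniformly random coefficient vectors
    e = (rng.random(m) < tau).astype(int)        # Bernoulli(tau) noise bits
    b = (A @ s + e) % 2                          # noisy inner products over GF(2)
    return A, b

# Illustrative parameters: dimension n, sample count m, noise rate tau.
n, m, tau = 16, 100, 0.125
s = rng.integers(0, 2, size=n)
A, b = lpn_samples(s, m, tau, rng)
```

The decoding problem the paper relates to LPN is structurally similar: recovering a codeword (equivalently, the message) from a noisy received word, with the code's generator matrix playing the role of the sample matrix `A`.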
📝 Abstract
The Learning Parity with Noise (LPN) problem underlies several classic cryptographic primitives. Researchers have endeavored to demonstrate the algorithmic difficulty of this problem by attempting to find a reduction from the decoding problem of linear codes, for which several hardness results exist. Earlier studies used code smoothing as a technical tool to achieve such reductions, showing that they are possible for codes with vanishing rate. This has left open the question of attaining a reduction with positive-rate codes. Addressing this case, we characterize the efficiency of the reduction in terms of the parameters of the decoding and LPN problems. In conclusion, we isolate the parameter regimes for which a meaningful reduction is possible and the regimes for which its existence is unlikely.