🤖 AI Summary
This work resolves the fundamental question of whether Reed–Solomon (RS) codes over small fields—of size $O(n)$—can achieve list-decoding capacity. It introduces a “random puncturing” construction and combines hypergraph modeling, symmetry analysis, and probabilistic methods to establish, for the first time, the existence of RS codes over fields of size $O(n)$ that attain capacity-approaching list decoding. It also determines the optimal dependence of the list size on the gap to capacity $\varepsilon$, namely $O(1/\varepsilon)$. Moreover, it uncovers a novel connection between this result and the GM-MDS theorem via a hypergraph lens. As a corollary, random linear codes over fields of size $2^{O(1/\varepsilon^2)}$ already achieve the optimal list size $O(1/\varepsilon)$, breaking a long-standing theoretical barrier: such performance was not previously known over constant-size fields.
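For context, the standard definition being invoked can be stated as follows (our notation, not taken verbatim from the paper): a code of rate $R$ is list-decodable to capacity when every Hamming ball of relative radius $1 - R - \varepsilon$ contains at most $L(\varepsilon)$ codewords.

```latex
% Sketch of the list-decodability condition assumed throughout (our notation):
% a code C \subseteq \Sigma^n of rate R is (1 - R - \varepsilon, L)-list-decodable if
\[
  \forall\, y \in \Sigma^n:\qquad
  \bigl|\, C \cap B\bigl(y,\ (1 - R - \varepsilon)\,n \bigr) \bigr| \;\le\; L ,
\]
% where B(y, r) is the Hamming ball of radius r around y.
% This work pins down the optimal list size as L = O(1/\varepsilon).
```

The radius $1 - R - \varepsilon$ is the (alphabet-independent) list-decoding capacity referenced in the summary; $\varepsilon$ measures the gap to that capacity.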
📝 Abstract
Reed–Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed–Solomon codes, a fundamental question in coding theory is determining whether Reed–Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam established that Reed–Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly punctured Reed–Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed–Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed–Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed–Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and the code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list size $O(1/\varepsilon)$ and near-optimal alphabet size $2^{O(1/\varepsilon^2)}$, where $\varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list size $O(1/\varepsilon)$ was not known to be achievable with any linear code over a constant alphabet size (even non-constructively), and it was also not known to be achievable for random linear codes over any alphabet size. Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.
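To make the objects in the abstract concrete, here is a minimal toy sketch (our own illustration, not code from the paper) of a Reed–Solomon codeword as the evaluations of a low-degree polynomial over a prime field, with the evaluation points chosen uniformly at random, i.e., a randomly punctured RS code:

```python
import random

Q = 97  # a small prime field F_97; the paper's regime is field size q = O(n)
K = 3   # message length = number of polynomial coefficients (degree < K)
N = 7   # block length = number of distinct evaluation points, N <= Q

def rs_encode(message, points, q=Q):
    """Evaluate the polynomial with coefficient list `message` at each point mod q."""
    return [sum(c * pow(x, i, q) for i, c in enumerate(message)) % q
            for x in points]

# "Random puncturing": pick the N evaluation points uniformly at random from F_q.
points = random.sample(range(Q), N)
codeword = rs_encode([5, 1, 2], points)

# Distinct codewords agree in fewer than K positions, since their difference
# is a nonzero polynomial of degree < K and so has at most K - 1 roots.
other = rs_encode([5, 1, 3], points)
agreements = sum(a == b for a, b in zip(codeword, other))
assert agreements < K
```

The distance property at the end is what makes RS codes MDS (distance exactly $n - k + 1$); the paper's question is how many codewords can crowd into a single Hamming ball of radius approaching $n - k$, which is a far more delicate matter.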
Alongside our proof, which maintains a hypergraph perspective on the list-decoding problem, we include an alternate presentation of the ideas of Brakensiek, Gopi, and Makam that more directly connects the list-decoding problem to the GM-MDS theorem via a hypergraph orientation theorem.