🤖 AI Summary
This work targets the looseness of the standard hashing bound for general Pauli channels, which can leave achievable coding rates on the table. Prior improvements relied on specific inner repetition codes; the paper instead proposes a universal channel-transformation framework built from arbitrary stabilizer codes. Using the symplectic representation, it characterizes exactly the joint distribution of logical errors and syndromes that physical Pauli noise induces through the transform. Reapplying the hashing bound to this induced channel, with the syndrome available to the decoder as side information, yields improved achievable rates. A structured search over small transforms identifies multiple instances that exceed the standard hashing bound on a family of Pauli channels with skewed, independent errors, demonstrating the effectiveness of the approach.
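As context for the summary above, the baseline rate being improved upon is the hashing bound $1 - H(p_I, p_X, p_Y, p_Z)$. A minimal sketch of computing it for a given Pauli error distribution (the function name `hashing_bound` is ours, not from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability entries."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def hashing_bound(p_i, p_x, p_y, p_z):
    """Baseline hashing rate 1 - H(p_I, p_X, p_Y, p_Z)
    for a memoryless Pauli channel."""
    return 1.0 - entropy([p_i, p_x, p_y, p_z])
```

For a noiseless channel this gives rate 1; for the maximally mixing channel (all four Paulis equiprobable) it gives $-1$, i.e. no rate is guaranteed.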
📝 Abstract
The quantum hashing bound guarantees that rates up to $1-H(p_I, p_X, p_Y, p_Z)$ are achievable for memoryless Pauli channels, but it is not generally tight. A known way to improve achievable rates for certain asymmetric Pauli channels is to apply a small inner stabilizer code to a few channel uses, decode, and treat the resulting logical noise as an induced Pauli channel; reapplying the hashing argument to this induced channel can beat the baseline hashing bound. We generalize this induced-channel viewpoint to arbitrary stabilizer codes used purely as channel transforms. Given any $[\![n,k]\!]$ stabilizer generator set, we construct a full symplectic tableau, compute the induced joint distribution of logical Pauli errors and syndromes under the physical Pauli channel, and obtain an achievable rate via a hashing bound with decoder side information. We perform a structured search over small transforms and report instances that improve the baseline hashing bound for a family of Pauli channels with skewed and independent errors studied in prior work.
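To make the induced-channel construction concrete, here is a toy sketch for the smallest possible transform: a hypothetical $[\![2,1]\!]$ code with stabilizer $ZZ$, taking logical $\bar{X}=XX$ and $\bar{Z}=ZI$ (these operator choices, and the rate formula $(k - H(L\mid S))/n$ as the reading of "hashing bound with decoder side information", are our illustrative assumptions, not the paper's reported constructions). Each Pauli is handled in symplectic $(x,z)$ form, syndromes and logical classes come from symplectic (commutation) products, and the achievable rate uses the conditional entropy of the logical class given the syndrome:

```python
import itertools
import math

# Single-qubit Paulis in symplectic (x, z) form: I=(0,0), X=(1,0), Y=(1,1), Z=(0,1).
PAULIS = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability entries."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def induced_rate(p_i, p_x, p_y, p_z):
    """Hashing rate with syndrome side information, (k - H(L|S)) / n,
    for the toy n=2, k=1 transform with stabilizer ZZ,
    logical X = XX and logical Z = ZI (assumed for illustration)."""
    probs = {"I": p_i, "X": p_x, "Y": p_y, "Z": p_z}
    joint = {}  # (syndrome, logical-X class, logical-Z class) -> probability
    for e1, e2 in itertools.product("IXYZ", repeat=2):
        (x1, z1), (x2, z2) = PAULIS[e1], PAULIS[e2]
        s = (x1 + x2) % 2   # symplectic product with stabilizer ZZ
        lx = x1 % 2         # symplectic product with logical Z = ZI
        lz = (z1 + z2) % 2  # symplectic product with logical X = XX
        key = (s, lx, lz)
        joint[key] = joint.get(key, 0.0) + probs[e1] * probs[e2]
    # Conditional entropy H(L | S) averaged over syndromes.
    h_ls = 0.0
    for s in (0, 1):
        p_s = sum(v for (ss, _, _), v in joint.items() if ss == s)
        if p_s > 0:
            cond = [v / p_s for (ss, _, _), v in joint.items() if ss == s]
            h_ls += p_s * entropy(cond)
    return (1.0 - h_ls) / 2.0
```

For the noiseless channel the logical class is deterministic and the rate is $k/n = 1/2$, as expected; the paper's searches would compare such induced rates against the baseline $1-H(p_I,p_X,p_Y,p_Z)$ over larger transforms and skewed channels.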