🤖 AI Summary
Problem: Under imperfect measurements, the extracted syndromes of quantum stabilizer codes are themselves error-prone, degrading the reliability of conventional decoders that assume noiseless syndrome information.
Method: The paper proposes generalized quantum data-syndrome codes, introducing a generalized check matrix that integrates quaternary (data-error) and binary (syndrome-error) alphabets, together with the corresponding Tanner graph with mixed variable nodes, to jointly model data and syndrome errors under phenomenological noise.
Contribution/Results: Building on this model, the authors design belief-propagation (BP) decoders that handle phenomenological errors and are applicable to general sparse quantum codes. In simulations, BP alone, without post-processing, achieves an error threshold exceeding 3% for quantum memory protected by rotated toric codes. The results indicate that $d$ rounds of syndrome extraction suffice for a toric code of distance $d$, with fewer rounds performing better at high error rates and more rounds at low error rates. The paper also proposes a construction of redundant stabilizer checks for single-shot error correction, and BP decoding remains highly effective even at high syndrome error rates.
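To make the joint data-syndrome model concrete, the toy sketch below builds a generalized check matrix of the form $[H \mid I]$, where each syndrome-error variable flips one measured syndrome bit. It uses the binary (CSS-style) picture and a hypothetical 3-qubit repetition check matrix for simplicity; the paper's actual matrix combines a quaternary data part with a binary syndrome part, so this is an illustrative assumption, not the paper's construction.

```python
import numpy as np

# Hypothetical stand-in for a stabilizer check matrix: the two Z-type
# checks of the 3-qubit repetition code (binary picture for simplicity).
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=int)
m, n = H.shape

# Generalized check matrix: data-error columns | syndrome-error columns.
# A syndrome-error variable flips one measured syndrome bit, so its
# column is the corresponding unit vector.
A = np.hstack([H, np.eye(m, dtype=int)])

def observed_syndrome(data_err, synd_err):
    """Syndrome actually read out: H @ e + s_err (mod 2)."""
    joint = np.concatenate([data_err, synd_err])
    return A @ joint % 2

# Bit flip on qubit 0 combined with a faulty second measurement:
s = observed_syndrome(np.array([1, 0, 0]), np.array([0, 1]))
print(s.tolist())  # the measurement fault creates a spurious defect
```

In this picture, a data error and a syndrome error can produce the same observed syndrome (here, flipping qubit 1 with perfect measurements also yields `[1, 1]`), which is exactly the ambiguity the joint Tanner graph lets the decoder weigh by their respective error rates.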
📝 Abstract
Quantum stabilizer codes often struggle with syndrome errors due to measurement imperfections. Typically, multiple rounds of syndrome extraction are employed to ensure reliable error information. In this paper, we consider phenomenological decoding problems, where data qubit errors may occur between extractions, and each measurement can be faulty. We introduce generalized quantum data-syndrome codes along with a generalized check matrix that integrates both quaternary and binary alphabets to represent diverse error sources. This results in a Tanner graph with mixed variable nodes, enabling the design of belief propagation (BP) decoding algorithms that effectively handle phenomenological errors. Importantly, our BP decoders are applicable to general sparse quantum codes. Through simulations, we achieve an error threshold of more than 3% for quantum memory protected by rotated toric codes, using solely BP without post-processing. Our results indicate that $d$ rounds of syndrome extraction are sufficient for a toric code of distance $d$. We observe that at high error rates, fewer rounds of syndrome extraction tend to perform better, while more rounds improve performance at lower error rates. Additionally, we propose a method to construct effective redundant stabilizer checks for single-shot error correction. Our simulations show that BP decoding remains highly effective even with a high syndrome error rate.
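The BP decoding over a Tanner graph with mixed variable nodes can be illustrated with the following minimal sketch. It runs log-domain sum-product BP on the joint check matrix $[H \mid I]$, with separate prior error rates for data and syndrome variables. All variables here are binary, so this is a simplified stand-in for the paper's hybrid quaternary/binary BP; the code, priors, and iteration count are illustrative assumptions.

```python
import numpy as np

def bp_syndrome_decode(A, syndrome, priors, iters=30):
    """Log-domain sum-product BP over the Tanner graph of binary check
    matrix A, decoding an observed syndrome (positive LLR favors 0)."""
    m, n = A.shape
    llr0 = np.log((1 - priors) / priors)      # prior LLRs per variable
    M_vc = np.tile(llr0, (m, 1)) * A          # variable-to-check messages
    M_cv = np.zeros((m, n))
    sgn = (-1.0) ** syndrome                  # per-check syndrome sign
    for _ in range(iters):
        for c in range(m):                    # check-node (tanh-rule) update
            vs = np.flatnonzero(A[c])
            t = np.tanh(np.clip(M_vc[c, vs] / 2, -15, 15))
            for i, v in enumerate(vs):
                p = np.clip(np.prod(np.delete(t, i)), -0.999999, 0.999999)
                M_cv[c, v] = sgn[c] * 2 * np.arctanh(p)
        total = llr0 + M_cv.sum(axis=0)       # variable-node update
        M_vc = (total[None, :] - M_cv) * A    # extrinsic: subtract own msg
        hard = (total < 0).astype(int)
        if ((A @ hard) % 2 == syndrome).all():
            break                             # valid correction found
    return hard

# Joint decoding of data + syndrome errors on a toy [3,1] repetition
# code with one round of noisy extraction: A = [H | I].
H = np.array([[1, 1, 0], [0, 1, 1]])
A = np.hstack([H, np.eye(2, dtype=int)])
priors = np.array([0.1, 0.1, 0.1, 0.01, 0.01])  # data vs. syndrome rates
hard = bp_syndrome_decode(A, np.array([1, 0]), priors)
print(hard.tolist())
```

Because syndrome-error variables are given a lower prior rate than data variables here, the decoder attributes the defect to a flip on qubit 0 rather than a measurement fault, returning `[1, 0, 0, 0, 0]`; raising the syndrome-error prior shifts that trade-off, which is the mechanism the mixed-alphabet Tanner graph exposes.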