🤖 AI Summary
Quantum error correction decoding has long faced a trade-off between accuracy and efficiency: classical minimum-weight perfect matching (MWPM) generalizes poorly across noise models and incurs high computational cost; tensor-network decoders achieve high accuracy but scale poorly; existing neural decoders improve throughput at the expense of fidelity. This work introduces the first learned decoder to achieve near-maximum-likelihood (ML) accuracy with linear scalability. Our contributions are threefold: (1) a stabilizer-aware dual-stream Transformer architecture that jointly models syndrome and logical-qubit features; (2) a differentiable logical-error-rate loss that directly optimizes the target metric; and (3) the integration of asymmetric attention, smooth finite-field approximations, and constraint-aware post-processing. On toric codes, the decoder achieves error thresholds of 10.99% for independent (bit-flip) noise and 18.6% for depolarizing noise, approaching the respective theoretical ML limits of 11.0% and 18.9%, and it outperforms state-of-the-art methods in accuracy, throughput, and parameter efficiency.
📝 Abstract
Quantum Error Correction (QEC) decoding faces a fundamental accuracy-efficiency trade-off. Classical methods such as Minimum Weight Perfect Matching (MWPM) exhibit variable performance across noise models and suffer from polynomial complexity, while tensor-network decoders achieve high accuracy at prohibitive computational cost. Recent neural decoders reduce complexity but lack the accuracy needed to compete with these expensive classical methods. We introduce SAQ-Decoder, a unified framework that combines transformer-based learning with constraint-aware post-processing to achieve both near-Maximum-Likelihood (ML) accuracy and computational cost that scales linearly with syndrome size. The architecture pairs a dual-stream transformer, which processes syndrome and logical information with asymmetric attention patterns, with a novel differentiable logical loss that directly optimizes the Logical Error Rate (LER) through smooth approximations over finite fields. SAQ-Decoder achieves near-optimal performance on toric codes, with error thresholds of 10.99% (independent noise) and 18.6% (depolarizing noise) that approach the ML bounds of 11.0% and 18.9%, while outperforming existing neural and classical baselines in accuracy, complexity, and parameter efficiency. Our findings establish that learned decoders can simultaneously achieve competitive decoding accuracy and computational efficiency, addressing key requirements for practical fault-tolerant quantum computing systems.
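The abstract's "smooth approximations over finite fields" refers to making the GF(2) parity that defines a logical error differentiable. The paper's exact loss is not given here, but one standard construction is sketched below: the probability that an odd number of independent flips occurs along a logical operator's support is a polynomial in the per-qubit flip probabilities, hence differentiable, and can be fed into a cross-entropy loss. Function names (`soft_parity`, `logical_loss`) are illustrative, not from the paper.

```python
import numpy as np

def soft_parity(p):
    """Smooth GF(2) parity: probability that an odd number of independent
    Bernoulli flips with probabilities p occurs. Since this is a polynomial
    in p, it is differentiable and usable as a training objective."""
    p = np.asarray(p, dtype=float)
    return 0.5 * (1.0 - np.prod(1.0 - 2.0 * p))

def logical_loss(p_residual, target=0.0, eps=1e-9):
    """Cross-entropy between the smooth logical-flip probability and the
    desired outcome (target=0 means no logical error). An illustrative
    stand-in for a differentiable LER-style loss, not the paper's exact one."""
    q = soft_parity(p_residual)
    return -(target * np.log(q + eps) + (1.0 - target) * np.log(1.0 - q + eps))
```

At the hard corners the smooth parity recovers exact GF(2) arithmetic (e.g. two certain flips cancel, giving parity 0), and any completely uncertain qubit (p = 0.5) pins the parity probability to 0.5, matching intuition.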