SAQ: Stabilizer-Aware Quantum Error Correction Decoder

📅 2025-12-09
🤖 AI Summary
Quantum error correction decoding has long faced a trade-off between accuracy and efficiency: classical minimum-weight perfect matching (MWPM) lacks generalizability and incurs high computational cost; tensor-network decoders achieve high accuracy but suffer from poor scalability; existing neural decoders improve throughput at the expense of fidelity. This work introduces the first learned decoder achieving near-maximum-likelihood (ML) accuracy with linear scalability. Our contributions are threefold: (1) a stabilizer-aware dual-stream Transformer architecture jointly modeling syndrome and logical-qubit features; (2) a differentiable logical error rate loss function directly optimizing the target metric; and (3) integration of asymmetric attention, finite-field smooth approximations, and constraint-aware post-processing. On toric codes, our decoder achieves error thresholds of 10.99% and 18.6% for bit- and phase-flip noise, respectively—approaching the theoretical ML limits (11.0% and 18.9%). It outperforms state-of-the-art methods in accuracy, throughput, and parameter efficiency.

📝 Abstract
Quantum Error Correction (QEC) decoding faces a fundamental accuracy-efficiency trade-off. Classical methods like Minimum Weight Perfect Matching (MWPM) exhibit variable performance across noise models and suffer from polynomial complexity, while tensor network decoders achieve high accuracy but at prohibitively high computational cost. Recent neural decoders reduce complexity but lack the accuracy needed to compete with computationally expensive classical methods. We introduce SAQ-Decoder, a unified framework combining transformer-based learning with constraint-aware post-processing that achieves both near-Maximum-Likelihood (ML) accuracy and linear computational scalability with respect to the syndrome size. Our approach combines a dual-stream transformer architecture that processes syndromes and logical information with asymmetric attention patterns, and a novel differentiable logical loss that directly optimizes Logical Error Rates (LER) through smooth approximations over finite fields. SAQ-Decoder achieves near-optimal performance on toric codes, with error thresholds of 10.99% (independent noise) and 18.6% (depolarizing noise) that approach the ML bounds of 11.0% and 18.9%, while outperforming existing neural and classical baselines in accuracy, complexity, and parameter efficiency. Our findings establish that learned decoders can simultaneously achieve competitive decoding accuracy and computational efficiency, addressing key requirements for practical fault-tolerant quantum computing systems.
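As background for the decoding problem the abstract describes: a stabilizer decoder never sees the error directly, only the syndrome, i.e. the parity of each stabilizer check over GF(2). A minimal sketch (not the paper's code; the 3-qubit check matrix `H` below is a hypothetical toy, not a toric code) of how a syndrome is obtained from an error pattern:

```python
import numpy as np

# Toy illustration: for a stabilizer code, the bit-flip syndrome is
# s = H e mod 2, where H is a stabilizer parity-check matrix and e is
# the binary error-indicator vector. The decoder's job is to infer a
# correction from s alone.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)  # hypothetical 3-qubit checks

def syndrome(H, e):
    """Parity of each stabilizer check over GF(2)."""
    return (H @ e) % 2

e = np.array([0, 1, 0], dtype=np.uint8)  # error on the middle qubit
print(syndrome(H, e))  # → [1 1]: both checks touching qubit 1 fire
```

The "linear scalability with respect to the syndrome size" claimed in the abstract refers to the decoder's cost as a function of the length of this syndrome vector.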
Problem

Research questions and friction points this paper is trying to address.

Achieves near-Maximum-Likelihood accuracy with linear computational scalability
Combines transformer learning with constraint-aware post-processing for quantum error correction
Addresses the accuracy-efficiency trade-off in decoding for fault-tolerant quantum computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based learning with constraint-aware post-processing
Dual-stream transformer with asymmetric attention patterns
Differentiable logical loss optimizing Logical Error Rates
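To give intuition for the differentiable logical loss: logical error rates depend on parities over GF(2), and XOR is not differentiable, so a smooth surrogate is needed. One standard relaxation (a hypothetical sketch of the idea, not the paper's exact construction) replaces XOR of bit probabilities with `p + q - 2pq`, which agrees with XOR on hard bits and has gradients everywhere:

```python
# Hypothetical sketch: relax GF(2) addition (XOR) to a smooth function
# of bit probabilities p, q in [0, 1]:
#   soft_xor(p, q) = p + q - 2*p*q
# This equals XOR exactly on {0, 1} and is differentiable everywhere,
# so a logical-error-rate-style objective built from such parities can
# be optimized end to end by gradient descent.
def soft_xor(p, q):
    return p + q - 2 * p * q

def soft_parity(ps):
    """Smooth parity of a sequence of bit probabilities."""
    acc = 0.0
    for p in ps:
        acc = soft_xor(acc, p)
    return acc

# Exact on hard bits:
assert soft_parity([1, 1, 0]) == 0
assert soft_parity([1, 0, 0]) == 1
# Smooth in between (both bits mostly 1, so parity is mostly 0):
print(soft_parity([0.9, 0.8]))  # → 0.26 (up to float rounding)
```

How SAQ-Decoder combines such smooth finite-field approximations with its constraint-aware post-processing is described in the paper; the snippet only illustrates why the loss can be made differentiable at all.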
David Zenati
School of Electrical and Computer Engineering, Ben-Gurion University of the Negev
Eliya Nachmani
Ben-Gurion University; Google Research
Deep Learning · Speech · Audio · Signal Processing · Information Theory