🤖 AI Summary
This work addresses a stringent requirement of superconducting quantum computing: quantum error correction decoding must be completed within 1 microsecond, a constraint unmet by existing approaches due to the trade-off between low latency and high accuracy. The paper presents the first FPGA-accelerated architecture tailored for graph neural network (GNN)-based decoders, integrating hardware-aware model optimization with a low-latency decoding algorithm. Evaluated on surface codes with distance d ≤ 7, the proposed system achieves end-to-end decoding latency below 1 microsecond while attaining a logical error rate superior to current state-of-the-art methods, thereby significantly advancing the hardware deployment of practical quantum error correction.
📝 Abstract
Quantum computers have the potential to solve certain complex problems far more efficiently than classical computers. Nevertheless, current quantum computer implementations are limited by high physical error rates. This issue is addressed by Quantum Error Correction (QEC) codes, which combine multiple physical qubits into a logical qubit to achieve a lower logical error rate, with the surface code being one of the most commonly used. The most time-critical step in this process is interpreting the measurements of the physical qubits to determine which errors have most likely occurred, a task called decoding. Consequently, the main challenge for QEC is to achieve high-accuracy error correction within the tight $1\,\mu s$ decoding time budget imposed by superconducting qubits. State-of-the-art QEC approaches trade accuracy for latency. In this work, we propose an FPGA accelerator for a neural-network-based decoder as a way to achieve a lower logical error rate than current methods within this tight time constraint, for code distances up to $d=7$. We achieve this by applying several hardware-aware optimizations to a high-accuracy GNN-based decoder. In addition, we propose accelerator optimizations that allow the FPGA-based decoder to reach a latency below $1\,\mu s$ with a lower error rate than the state of the art.
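To make the decoding task concrete, here is a minimal sketch of syndrome decoding for a 3-qubit bit-flip repetition code. This toy code and lookup-table decoder are illustrative assumptions, far simpler than the surface code and GNN decoder of the paper, but they show the same loop the abstract describes: measure parity checks on physical qubits, then infer and apply the most likely correction.

```python
# Minimal sketch: lookup-table decoding for a 3-qubit bit-flip
# repetition code (an illustrative stand-in, NOT the paper's
# surface-code GNN decoder). Two parity checks (q0^q1, q1^q2)
# yield a 2-bit syndrome; each syndrome maps to the most likely
# single-qubit error under an independent bit-flip noise model.

SYNDROME_TO_CORRECTION = {
    (0, 0): (0, 0, 0),  # no error detected
    (1, 0): (1, 0, 0),  # flip on qubit 0 most likely
    (1, 1): (0, 1, 0),  # flip on qubit 1 most likely
    (0, 1): (0, 0, 1),  # flip on qubit 2 most likely
}

def measure_syndrome(qubits):
    """Parity checks between neighbouring qubits (the 'measurements')."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def decode(qubits):
    """Apply the most likely correction inferred from the syndrome."""
    correction = SYNDROME_TO_CORRECTION[measure_syndrome(qubits)]
    return tuple(q ^ c for q, c in zip(qubits, correction))

# Any single bit flip is corrected back to the nearest codeword.
assert decode((0, 1, 0)) == (0, 0, 0)
assert decode((1, 0, 1)) == (1, 1, 1)
assert decode((1, 1, 1)) == (1, 1, 1)  # valid codeword left untouched
```

A lookup table like this is exact but scales exponentially with code size; the paper's contribution is making a far more expressive GNN decoder meet the same sub-microsecond budget on an FPGA for surface codes up to $d=7$.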