AI Summary
Quantum bits are highly susceptible to environmental noise, necessitating low-latency, scalable decoders for surface-code quantum error correction. While existing neural-network decoders achieve high pseudothresholds and good scalability, their serial execution fails to meet the stringent real-time decoding requirement of under 440 ns. This work proposes the first hardware-level parallelizable, fully feedforward neural network (FFNN) surface-code decoder, implemented on a compute-in-memory (CIM) architecture to jointly model the syndrome graph and perform the high-order decoding mapping. Evaluated across code distances 3–9, the decoder achieves latencies of 197–252 ns, fully compliant with the 440 ns constraint, and attains pseudothresholds of 10.4%–12.0%, approaching the theoretical threshold of 14.22%. By eliminating the serial bottleneck, this work pioneers the parallel deployment of an end-to-end FFNN decoder, establishing a new real-time decoding paradigm for large-scale fault-tolerant quantum computing.
Abstract
Among all types of surface-code decoders, fully neural-network-based high-level decoders offer decoding thresholds that surpass those of Minimum Weight Perfect Matching (MWPM) decoders and exhibit strong scalability, making them among the most promising solutions to the surface-code decoding challenge. However, current fully neural-network-based high-level decoders can only operate serially and therefore fail to meet current latency requirements (below 440 ns). To address these challenges, we propose the first parallel fully feedforward neural network (FFNN) high-level surface-code decoder and comprehensively measure its decoding performance on a computing-in-memory (CIM) hardware simulation platform. With currently available hardware specifications, our work achieves a decoding threshold of 14.22% and high pseudo-thresholds of 10.4%, 11.3%, 12.0%, and 11.6%, with decoding latencies of 197.03 ns, 234.87 ns, 243.73 ns, and 251.65 ns for distances 3, 5, 7, and 9, respectively.
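To make the "high-level decoding" idea concrete, the sketch below shows the kind of mapping such a decoder implements: syndrome bits in, logical-error class out, via a fixed sequence of dense matrix products with no data-dependent loops, which is what makes the forward pass amenable to parallel (e.g. CIM crossbar) evaluation. This is a minimal illustrative model with random weights, not the paper's actual architecture; the layer sizes and class labels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 3                       # code distance (smallest case in the paper)
N_SYNDROME = D * D - 1      # 8 stabilizer measurement bits for d = 3
N_CLASSES = 4               # hypothetical logical classes: I, X, Z, Y

# Two hidden layers; every layer is a dense matmul, so the whole forward
# pass is a fixed chain of matrix products that parallel hardware can
# evaluate in one shot, with no serial matching loop.
W1 = rng.standard_normal((N_SYNDROME, 32)); b1 = np.zeros(32)
W2 = rng.standard_normal((32, 32));         b2 = np.zeros(32)
W3 = rng.standard_normal((32, N_CLASSES)); b3 = np.zeros(N_CLASSES)

def decode(syndrome: np.ndarray) -> np.ndarray:
    """Map a batch of syndrome bit-vectors to logical-class probabilities."""
    h = np.maximum(syndrome @ W1 + b1, 0.0)   # ReLU
    h = np.maximum(h @ W2 + b2, 0.0)
    logits = h @ W3 + b3
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over classes

# Random syndromes stand in for real stabilizer measurements.
syndromes = rng.integers(0, 2, size=(16, N_SYNDROME)).astype(float)
probs = decode(syndromes)
print(probs.shape)  # one class distribution per syndrome: (16, 4)
```

The contrast with MWPM is the point of the sketch: MWPM runs a graph-matching algorithm whose work grows with the number of defects, whereas this forward pass has constant, data-independent structure, so its latency is set purely by the hardware's matrix-multiply throughput.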