🤖 AI Summary
Background: Hypergraph product (HGP) quantum low-density parity-check (LDPC) codes exhibit suboptimal error-correction performance under the quantum erasure channel (i.e., qubit loss).
Method: This paper proposes a data-driven framework for optimizing code construction. It applies deep reinforcement learning and simulated annealing, augmented by random-walk-based sampling of code structures and feedback from maximum-likelihood decoding, to optimize the code structure end to end, without relying on predefined combinatorial constraints (e.g., large girth) or other structural assumptions.
Contribution/Results: The optimized codes significantly outperform state-of-the-art HGP codes under the quantum erasure channel, achieving a 1–2 order-of-magnitude reduction in logical error rate. This demonstrates the effectiveness, robustness, and scalability of data-driven paradigms for quantum code design.
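The sampling-based search described above can be illustrated with a minimal simulated-annealing sketch over classical parity-check matrices. Everything here is a simplified stand-in, not the paper's implementation: the cost function scores unique recoverability of a *classical* code under random erasures (erased columns linearly independent over GF(2)) rather than the logical error rate of the full hypergraph product code under ML decoding, and all function names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = A.copy().astype(np.uint8)
    rank = 0
    for col in range(A.shape[1]):
        pivots = np.flatnonzero(A[rank:, col])
        if pivots.size == 0:
            continue
        piv = rank + pivots[0]
        A[[rank, piv]] = A[[piv, rank]]          # swap pivot row into place
        mask = A[:, col].copy()
        mask[rank] = 0
        A[mask == 1] ^= A[rank]                  # eliminate column below/above
        rank += 1
        if rank == A.shape[0]:
            break
    return rank

def random_parity_check(m, n, row_weight=3):
    """A random sparse (LDPC-style) binary parity-check matrix."""
    H = np.zeros((m, n), dtype=np.uint8)
    for r in range(m):
        H[r, rng.choice(n, size=row_weight, replace=False)] = 1
    return H

def mutate(H):
    """Random-walk step: move one nonzero entry within its row,
    preserving the row weight (sparsity) of the code."""
    H = H.copy()
    r = rng.integers(H.shape[0])
    H[r, rng.choice(np.flatnonzero(H[r] == 1))] = 0
    H[r, rng.choice(np.flatnonzero(H[r] == 0))] = 1
    return H

def erasure_failure_rate(H, p=0.3, trials=200):
    """Monte-Carlo proxy cost (stand-in for ML-decoder feedback):
    an erasure pattern is uniquely recoverable iff the erased
    columns of H are linearly independent over GF(2)."""
    n = H.shape[1]
    fails = 0
    for _ in range(trials):
        erased = np.flatnonzero(rng.random(n) < p)
        if erased.size and gf2_rank(H[:, erased]) < erased.size:
            fails += 1
    return fails / trials

def anneal(H, steps=100, T0=0.1):
    """Simulated annealing over parity-check matrices."""
    cur, cur_cost = H, erasure_failure_rate(H)
    best, best_cost = cur, cur_cost
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-9          # linear cooling schedule
        cand = mutate(cur)
        cost = erasure_failure_rate(cand)
        if cost <= cur_cost or rng.random() < np.exp((cur_cost - cost) / T):
            cur, cur_cost = cand, cost
        if cur_cost < best_cost:
            best, best_cost = cur, cur_cost
    return best, best_cost

H0 = random_parity_check(6, 12)
H_opt, cost = anneal(H0, steps=50)
```

In the paper's setting the cost would instead be the logical error rate of the HGP code under the efficient ML erasure decoder, and the proposal distribution would be guided by the learned RL policy rather than uniform random moves.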
📝 Abstract
Hypergraph product codes are quantum low-density parity-check (LDPC) codes constructed from two classical LDPC codes. Although their dimension and distance depend only on the parameters of the underlying classical codes, optimizing their performance against various noise channels remains challenging. This difficulty partly stems from the complexity of decoding in the quantum setting. The standard, ad hoc approach typically involves selecting classical LDPC codes with large girth. In this work, we focus on optimizing performance against the quantum erasure channel. A key advantage of this channel is the existence of an efficient maximum-likelihood decoder, which enables us to employ optimization techniques based on sampling random codes, such as Reinforcement Learning (RL) and Simulated Annealing (SA). Our results indicate that these techniques improve performance relative to the state-of-the-art.
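For context, the hypergraph product construction referenced in the abstract is standard: from classical parity-check matrices $H_1$ ($m_1 \times n_1$) and $H_2$ ($m_2 \times n_2$) one builds CSS check matrices $H_X = [H_1 \otimes I_{n_2} \mid I_{m_1} \otimes H_2^T]$ and $H_Z = [I_{n_1} \otimes H_2 \mid H_1^T \otimes I_{m_2}]$, which satisfy the commutation condition $H_X H_Z^T = 0 \pmod 2$. A minimal sketch (the example input, the repetition code, is ours for illustration):

```python
import numpy as np

def hypergraph_product(H1, H2):
    """CSS check matrices of the hypergraph product of two classical codes:
    H_X = [H1 (x) I_{n2} | I_{m1} (x) H2^T],
    H_Z = [I_{n1} (x) H2 | H1^T (x) I_{m2}]."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=np.uint8)),
                    np.kron(np.eye(m1, dtype=np.uint8), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=np.uint8), H2),
                    np.kron(H1.T, np.eye(m2, dtype=np.uint8))]) % 2
    return HX, HZ

# Example: product of the length-3 repetition code with itself
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]], dtype=np.uint8)
HX, HZ = hypergraph_product(H_rep, H_rep)
assert (HX @ HZ.T % 2 == 0).all()  # X- and Z-checks commute
```

The optimization studied in the paper varies the two classical input matrices (rather than the product formula itself), since the quantum code's structure, and hence its erasure performance, is entirely determined by them.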