🤖 AI Summary
Long-distance continuous-variable quantum key distribution (CVQKD) suffers from high channel noise, which forces very low error-correction code rates and causes drastic key-rate degradation. To address this, the paper proposes a reverse-reconciliation scheme whose first step is advantage distillation based on random-codebook error correction; at the low rates required in this regime, random codebooks become feasible and are known to perform close to capacity. The scheme also introduces a way to achieve statistical decoupling between the publicly communicated reconciliation data and the final secret key. A security analysis under Gaussian collective attacks, supported by numerical results, yields key rates far above the Devetak-Winter value and above the PLOB bound for continuous-variable detection, though still below the PLOB bound for discrete-variable detection. Notably, the best performance is obtained when the code is operated above the Shannon limit, a consequence of the tradeoff between code rate and frame rejection rate.
📝 Abstract
Continuous-Variable Quantum Key Distribution (CVQKD) at large distances has such high noise levels that the error-correcting code must have a very low rate. In this regime it becomes feasible to implement random-codebook error correction, which is known to perform close to capacity. We propose a reverse reconciliation scheme for CVQKD in which the first step is advantage distillation based on random-codebook error correction operated above the Shannon limit. Our scheme has a novel way of achieving statistical decoupling between the public reconciliation data and the secret key. We provide an analysis of the secret key rate for the case of Gaussian collective attacks, and we present numerical results. The best performance is obtained when the message size exceeds the mutual information $I(X;Y)$ between Alice's quadratures $X$ and Bob's measurements $Y$, i.e. the Shannon limit. This somewhat counter-intuitive result is understood from a tradeoff between code rate and frame rejection rate, combined with the fact that error correction for QKD needs to reconcile only random data. We obtain secret key rates that lie far above the Devetak-Winter value $I(X;Y) - I(E;Y)$, which is the upper bound in the case of one-way error correction. Furthermore, our key rates lie above the PLOB bound for Continuous-Variable detection, but below the PLOB bound for Discrete-Variable detection.
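The mechanism behind the rate-versus-rejection tradeoff can be illustrated with a toy sketch of random-codebook minimum-distance decoding with frame rejection. This is not the authors' construction: the binary symmetric channel model, the block length, the code rate, and the rejection threshold below are all illustrative assumptions chosen only to show the decoding-with-rejection idea.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's scheme): a random codebook,
# minimum-distance decoding over a binary symmetric channel (BSC), and a
# rejection rule that discards frames whose best codeword is still too far.
rng = np.random.default_rng(0)

n, k = 64, 8                      # block length and message bits (rate k/n)
p = 0.02                          # BSC crossover probability (assumed)
codebook = rng.integers(0, 2, size=(2 ** k, n), dtype=np.uint8)

def decode(y, threshold):
    """Return the index of the closest codeword, or None (frame rejected)
    if even the closest codeword differs in more than `threshold` bits."""
    dist = np.count_nonzero(codebook ^ y, axis=1)
    best = int(np.argmin(dist))
    return best if dist[best] <= threshold else None

# Transmit message 5 through the BSC and attempt to decode it.
noise = (rng.random(n) < p).astype(np.uint8)
received = codebook[5] ^ noise
print(decode(received, threshold=8))
```

Pushing the rate $k/n$ above capacity makes more frames fail the threshold test and get rejected, but each accepted frame carries more message bits; the counter-intuitive optimum described in the abstract arises from exactly this kind of throughput tradeoff, which is straightforward to explore numerically.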