Maximizing Qubit Throughput under Buffer Decoherence and Variability in Generation

📅 2026-03-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the fundamental trade-off between throughput and fidelity in quantum communication networks, where qubits suffer from generation delays and buffer-induced decoherence. The authors formulate this challenge for the first time as an admission control problem in a finite-buffer queue, with rewards decaying as a function of waiting time. They theoretically characterize the optimality conditions for a "zero-delay" policy and propose an adaptive scheduling framework based on Bayesian online learning to handle unknown system parameters. The proposed approach significantly enhances throughput while preserving high fidelity, and its design principles are extensible to other delay-sensitive Internet-of-Things sensing and service systems.

πŸ“ Abstract
Quantum communication networks require the transmission of high-fidelity, uncoded qubits for applications such as entanglement distribution and quantum key distribution. However, current implementations are constrained by limited buffer capacity and qubit decoherence, which degrades qubit quality while the qubit waits in the buffer. A key challenge arises from the stochastic nature of qubit generation: there is a random delay (D) between the initiation of a generation request and the availability of the qubit. This induces a fundamental trade-off: early initiation increases buffer waiting time and hence decoherence, whereas delayed initiation leads to server idling and reduced throughput. We model this system as an admission control problem in a finite-buffer queue, where the reward associated with each job is a decreasing function of its sojourn time. We derive analytical conditions under which a simple "no lag" policy, in which a new qubit is generated immediately upon the availability of buffer space, is optimal. To address scenarios with unknown system parameters, we further develop a Bayesian learning framework that adaptively optimizes the admission policy. Beyond quantum communication systems, the proposed model applies to delay-sensitive IoT sensing and service systems.
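The trade-off the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's model but a minimal single-slot version under assumed exponential generation delays and service times; the parameter names (`lag`, `gamma`, `gen_rate`, `service_rate`) and the reward-rate metric are illustrative choices, not taken from the paper.

```python
import math
import random

def simulate(lag, gamma=0.5, gen_rate=2.0, service_rate=1.0,
             n_jobs=20000, seed=0):
    """Minimal sketch of a single-slot admission model.

    A generation request is initiated `lag` time units after the buffer
    slot frees (lag = 0 corresponds to the "no lag" policy).  The qubit
    arrives after a random delay D, waits for the server, and earns a
    reward exp(-gamma * wait) that decays with time spent in the buffer,
    modelling decoherence.  Returns the long-run reward per unit time,
    i.e. a fidelity-weighted throughput.
    """
    rng = random.Random(seed)
    slot_free_at = 0.0    # when the buffer slot last became free
    server_free_at = 0.0  # when the server finishes its current job
    total_reward = 0.0
    for _ in range(n_jobs):
        request_at = slot_free_at + lag
        arrival = request_at + rng.expovariate(gen_rate)   # random delay D
        start = max(arrival, server_free_at)               # wait if server busy
        wait = start - arrival                             # buffer sojourn time
        total_reward += math.exp(-gamma * wait)            # decohered reward
        server_free_at = start + rng.expovariate(service_rate)
        slot_free_at = start   # slot frees when the qubit enters service
    return total_reward / server_free_at
```

Under these illustrative parameters, comparing `simulate(0.0)` against `simulate(2.0)` shows the no-lag policy earning a higher reward rate: the fidelity gained per qubit by delaying generation does not compensate for the server idling it causes, which is the regime in which the paper's analytical conditions favor "no lag".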
Problem

Research questions and friction points this paper is trying to address.

qubit throughput
buffer decoherence
generation variability
quantum communication networks
stochastic generation delay
Innovation

Methods, ideas, or system contributions that make the work stand out.

admission control
qubit decoherence
Bayesian learning
throughput optimization
quantum communication networks
Padma Priyanka
Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
Avhishek Chatterjee
Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
Sheetal Kalyani
Professor, Electrical Engineering, IIT Madras
Research interests: statistical learning theory and robust statistics, special functions, 6G communications, deep learning, extreme value theory