AI Summary
Multi-qubit parallel readout in superconducting quantum processors is hindered by crosstalk under frequency multiplexing and by the high computational overhead and poor scalability of conventional neural network approaches. This work proposes a lightweight signal processing paradigm based on polynomial reservoir computing (PRC): it eliminates nonlinear activation functions, employs polynomial feature mapping with a linear readout architecture, and enables hardware-friendly parallel computation, real-time incremental training, and online adaptability. To our knowledge, this is the first application of real-time trainable PRC to superconducting qubit readout. Experiments demonstrate a 50% reduction in single-qubit readout error and an 11% reduction in five-qubit readout error, along with a 2.5× suppression of inter-qubit crosstalk. Moreover, inference requires 100× fewer multiplications than mainstream machine learning methods for single-qubit readout and 2.5× fewer for five-qubit readout.
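The core PRC idea can be sketched in a few lines. The following is an illustrative toy example, not the authors' implementation: raw measurement samples are lifted into polynomial features (here, a constant, linear terms, and pairwise products), and a purely linear readout, trained in closed form by ridge regression, maps those features to a state label. All data, dimensions, and the regularization constant `lam` are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_features(x):
    """Degree-2 polynomial feature map: [1, x_i, x_i * x_j (i <= j)]."""
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate(([1.0], x, quad))

# Toy dataset: noisy 4-sample "readout traces" for states |0> and |1>.
n = 400
X0 = rng.normal(-1.0, 0.8, size=(n, 4))   # traces labeled state 0
X1 = rng.normal(+1.0, 0.8, size=(n, 4))   # traces labeled state 1
F = np.array([poly_features(x) for x in np.vstack([X0, X1])])
y = np.concatenate([-np.ones(n), np.ones(n)])  # +/-1 state labels

# Linear readout trained in closed form (ridge regression). Inference
# is a single matrix-vector product plus a sign: no activation
# functions, so it parallelizes trivially in hardware.
lam = 1e-3
W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

pred = np.sign(F @ W)
acc = (pred == y).mean()
```

Because both the feature map and the readout are built from multiplications and additions only, the per-shot inference cost is easy to count and to bound, which is the source of the multiplication-count comparisons quoted above.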
Abstract
Quantum processors require rapid and high-fidelity simultaneous measurements of many qubits. While superconducting qubits are among the leading modalities toward a useful quantum processor, their readout remains a bottleneck. Traditional approaches to processing measurement data often struggle to account for crosstalk present in frequency-multiplexed readout, the preferred method to reduce the resource overhead. Recent approaches to address this challenge use neural networks to improve the state-discrimination fidelity. However, they are computationally expensive to train and evaluate, resulting in increased latency and poor scalability as the number of qubits increases. We present an alternative machine learning approach based on next-generation reservoir computing that constructs polynomial features from the measurement signals and maps them to the corresponding qubit states. This method is highly parallelizable, avoids the costly nonlinear activation functions common in neural networks, and supports real-time training, enabling fast evaluation, adaptability, and scalability. Despite its lower computational complexity, our reservoir approach is able to maintain high qubit-state-discrimination fidelity. Relative to traditional methods, our approach achieves error reductions of up to 50% and 11% on single- and five-qubit datasets, respectively, and delivers up to 2.5x crosstalk reduction on the five-qubit dataset. Compared with recent machine-learning methods, evaluating our model requires 100x fewer multiplications for single-qubit and 2.5x fewer for five-qubit models. This work demonstrates that reservoir computing can enhance qubit-state discrimination while maintaining scalability for future quantum processors.
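The "real-time training" property claimed for the reservoir approach can be realized with a standard recursive least squares (RLS) update, which refines the linear readout one measurement at a time instead of re-solving the full regression. The sketch below is a hedged illustration under assumed symbols (`P`, `w`, `lam` are not from the paper): each update costs O(d²) multiplications for a d-dimensional feature vector and uses no nonlinear activations.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6                      # feature dimension after the polynomial map
lam = 1e2                  # inverse of the initial ridge strength
P = lam * np.eye(d)        # running inverse-covariance estimate
w = np.zeros(d)            # linear readout weights

def rls_update(P, w, f, target):
    """One RLS step: incrementally fit w to map features f -> target."""
    Pf = P @ f
    k = Pf / (1.0 + f @ Pf)          # gain vector
    w = w + k * (target - f @ w)     # correct the prediction error
    P = P - np.outer(k, Pf)          # rank-1 covariance downdate
    return P, w

# Stream of toy feature vectors with +/-1 targets from a fixed rule.
w_true = rng.normal(size=d)
for _ in range(500):
    f = rng.normal(size=d)
    P, w = rls_update(P, w, f, np.sign(f @ w_true))

# Sign agreement of the learned readout on fresh samples.
F_test = rng.normal(size=(200, d))
agree = np.mean(np.sign(F_test @ w) == np.sign(F_test @ w_true))
```

This is why the abstract can claim adaptability: as readout conditions drift, the same update rule keeps refining the discriminator online without retraining from scratch.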