📝 Abstract
We present an end-to-end workflow for superconducting qubit readout that embeds co-designed Neural Networks (NNs) into the Quantum Instrumentation Control Kit (QICK). Capitalizing on the custom firmware and software of the QICK platform, which is built on Xilinx RFSoC FPGAs, we aim to leverage machine learning (ML) to address critical challenges in qubit readout accuracy and scalability. The workflow utilizes the hls4ml package and employs quantization-aware training to translate ML models into hardware-efficient FPGA implementations via user-friendly Python APIs. We experimentally demonstrate the design, optimization, and integration of an ML algorithm for single transmon qubit readout, achieving 96% single-shot fidelity with a latency of 32 ns and less than 16% FPGA look-up table resource utilization. Our results offer the community an accessible workflow to advance ML-driven readout and adaptive control in quantum information processing applications.
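The quantization-aware training mentioned above can be illustrated with a minimal numpy sketch of fixed-point "fake quantization": during training, weights are rounded to the reduced-precision grid the FPGA will use, so the trained network already matches the deployed arithmetic. The 8-bit width and 3 integer bits here are illustrative assumptions (the paper's actual bit widths are chosen through the hls4ml/QKeras configuration and are not specified in this abstract).

```python
import numpy as np

def quantize_fixed(x, total_bits=8, int_bits=3):
    """Snap x to a signed fixed-point grid: total_bits wide,
    int_bits integer bits (sign included), rest fractional bits.
    This mirrors the ap_fixed-style types hls4ml maps weights onto."""
    frac_bits = total_bits - int_bits
    step = 2.0 ** -frac_bits                # smallest representable increment
    lo = -(2.0 ** (int_bits - 1))           # most negative representable value
    hi = 2.0 ** (int_bits - 1) - step       # most positive representable value
    return np.clip(np.round(np.asarray(x) / step) * step, lo, hi)

# In quantization-aware training, each layer's weights pass through this
# rounding in the forward pass; gradients flow through unchanged
# (straight-through estimator), so the optimizer learns weights that
# survive the FPGA's reduced precision.
w = np.array([0.30, -1.87, 2.05])
print(quantize_fixed(w))  # values snapped to the 1/32 grid, clipped to [-4, 4)
```

The key design point is that quantization happens inside the training loop rather than as a post-training conversion, which is what keeps fidelity high at very small bit widths and, in turn, keeps the synthesized design within the reported LUT budget.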