🤖 AI Summary
This work addresses the data-scarcity challenge in quantum system characterization by proposing a physics-informed meta-learning framework designed to rapidly adapt to novel configurations of two-level systems (closed and open) and Heisenberg spin chains using minimal experimental data. Methodologically, it combines adaptive learning rates with a global optimizer, integrating physics-driven parameter sharing with task-adaptive optimization to enhance generalization and robustness. Evaluated on real experimental data from Ge/Si nanowire Loss-DiVincenzo quantum dots, the framework achieves high-accuracy prediction of key parameters, including the $g$-factor and Rabi frequency, with significantly fewer samples than baseline models (e.g., Transformer, MLP) and state-of-the-art meta-learning approaches, while also demonstrating substantially improved computational efficiency. The proposed paradigm provides a scalable, physics-aware meta-learning solution for rapid characterization of quantum hardware.
📝 Abstract
While machine learning holds great promise for quantum technologies, most current methods focus on predicting or controlling a specific quantum system. Meta-learning approaches, however, can adapt to new systems for which little data is available by leveraging knowledge obtained from previous data associated with similar systems. In this paper, we meta-learn the dynamics and characteristics of closed and open two-level systems, as well as the Heisenberg model. Based on experimental data from a Loss-DiVincenzo spin qubit hosted in a Ge/Si core/shell nanowire at different gate-voltage configurations, we use meta-learning to predict qubit characteristics, namely the $g$-factor and the Rabi frequency. The algorithm we introduce improves upon previous state-of-the-art meta-learning methods for physics-based systems through novel techniques such as adaptive learning rates and a global optimizer, yielding improved robustness and increased computational efficiency. We benchmark our method against other meta-learning methods, a vanilla transformer, and a multilayer perceptron, and demonstrate improved performance.
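The abstract's core mechanism, task-specific adaptation with learned per-parameter inner learning rates inside a shared outer (meta) optimization loop, can be illustrated independently of the paper's exact algorithm, which is not reproduced here. The following is a minimal first-order sketch in the style of Meta-SGD, applied to a toy family of sine-regression tasks standing in for qubit configurations; all function names, the task family, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(n_tasks=8, n_shots=10):
    """Toy task family: y = a*sin(x) + b, with (a, b) varying per task,
    standing in for system parameters that change between configurations."""
    tasks = []
    for _ in range(n_tasks):
        a, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
        x = rng.uniform(-3.0, 3.0, n_shots)
        tasks.append((x, a * np.sin(x) + b))
    return tasks

def predict(w, x):
    """Linear model on fixed [sin(x), 1] features, so gradients stay analytic."""
    feats = np.stack([np.sin(x), np.ones_like(x)], axis=1)
    return feats @ w, feats

def meta_train(steps=300, n_tasks=8):
    w = np.zeros(2)              # shared initialization (meta-parameters)
    alpha = 0.1 * np.ones(2)     # learnable per-parameter inner learning rates
    meta_lr = 0.05               # outer ("global") learning rate
    for _ in range(steps):
        gw, ga = np.zeros(2), np.zeros(2)
        for x, y in task_batch(n_tasks):
            # One task-adaptation step: w_t = w - alpha * grad(loss)(w)
            pred, feats = predict(w, x)
            g_in = 2 * feats.T @ (pred - y) / len(x)
            w_t = w - alpha * g_in
            # First-order meta-gradient for w; exact meta-gradient for alpha
            # (for a single inner step, d w_t / d alpha_i = -g_in[i]).
            pred_t, feats_t = predict(w_t, x)
            g_out = 2 * feats_t.T @ (pred_t - y) / len(x)
            gw += g_out
            ga += -g_out * g_in
        w -= meta_lr * gw / n_tasks
        alpha = np.clip(alpha - meta_lr * ga / n_tasks, 1e-3, 0.5)
    return w, alpha
```

After meta-training, a new task is handled by a handful of gradient steps from `w` using the learned `alpha`, which is the few-shot adaptation regime the abstract describes; the clipping of `alpha` is one simple stand-in for the robustness mechanisms the paper attributes to its adaptive learning rates and global optimizer.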