🤖 AI Summary
To address the poor generalization of quantum machine learning (QML) under scarce labeled data, this paper introduces the first end-to-end contrastive pretraining framework implemented natively on trapped-ion quantum hardware. Methodologically, images are encoded as quantum states, and a self-supervised contrastive loss—computed directly on hardware via quantum state overlap—is employed for representation learning, followed by downstream classification fine-tuning. The key contributions are: (i) the first hardware-native, label-free contrastive pretraining paradigm for QML; and (ii) learned quantum representations that exhibit strong robustness in few-shot settings, significantly improving mean classification accuracy while reducing run-to-run variance. Experimental results demonstrate enhanced learning efficiency and stability of QML models, establishing a pathway toward label-efficient, quantum-native representation learning.
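As a rough illustration of the core idea, the sketch below computes an InfoNCE-style contrastive loss in which pairwise similarity is the quantum state overlap |⟨ψᵢ|ψⱼ⟩|². The numpy statevectors, the `overlap_similarity` and `contrastive_loss` helpers, and the specific loss form are illustrative assumptions, not the paper's implementation; on the trapped-ion device the overlap would be estimated from measurements rather than computed from classical statevectors.

```python
import numpy as np

def overlap_similarity(psi_a: np.ndarray, psi_b: np.ndarray) -> float:
    """Fidelity-style similarity |<psi_a|psi_b>|^2 between two pure states.

    On hardware this quantity would be estimated from measurement outcomes
    (e.g., an overlap/SWAP-type test); here it is computed exactly on
    classical statevectors purely for illustration.
    """
    return float(np.abs(np.vdot(psi_a, psi_b)) ** 2)

def contrastive_loss(anchors, positives, negatives, temperature=0.1) -> float:
    """InfoNCE-style contrastive loss using state overlaps as similarities.

    anchors, positives: lists of statevectors forming positive pairs
    negatives: list of statevectors treated as negatives for every anchor
    """
    losses = []
    for psi, psi_pos in zip(anchors, positives):
        pos = np.exp(overlap_similarity(psi, psi_pos) / temperature)
        neg = sum(np.exp(overlap_similarity(psi, psi_n) / temperature)
                  for psi_n in negatives)
        losses.append(-np.log(pos / (pos + neg)))
    return float(np.mean(losses))
```

Maximizing overlap for positive pairs while suppressing it for negatives is what drives the learned representation to capture invariances from unlabeled examples, which the downstream fine-tuning step then exploits.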
📝 Abstract
Quantum machine learning (QML) has attracted growing interest with the rapid parallel advances in large-scale classical machine learning and quantum technologies. Similar to classical machine learning, QML models also face challenges arising from the scarcity of labeled data, particularly as their scale and complexity increase. Here, we introduce self-supervised pretraining of quantum representations that reduces reliance on labeled data by learning invariances from unlabeled examples. We implement this paradigm on a programmable trapped-ion quantum computer, encoding images as quantum states. In situ contrastive pretraining on hardware yields a representation that, when fine-tuned, classifies image families with higher mean test accuracy and lower run-to-run variability than models trained from random initialization. The performance improvement is especially significant in regimes with limited labeled training data. We show that the learned invariances generalize beyond the pretraining image samples. Unlike prior work, our pipeline derives similarity from measured quantum overlaps and executes all training and classification stages on hardware. These results establish a label-efficient route to quantum representation learning, with direct relevance to quantum-native datasets and a clear path to larger classical inputs.
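For intuition about the encoding step, here is a minimal classical sketch assuming amplitude encoding, one common way to map a normalized pixel vector onto a pure state. The abstract does not specify the encoding circuit, so this choice, the `amplitude_encode` helper, and the toy 2×2 "views" are hypothetical; on hardware the overlap between two encoded views would be measured rather than computed from statevectors.

```python
import numpy as np

def amplitude_encode(image: np.ndarray) -> np.ndarray:
    """Encode a 2^n-pixel image as an n-qubit pure state via amplitude encoding."""
    flat = image.astype(float).ravel()
    dim = len(flat)
    assert dim > 0 and (dim & (dim - 1)) == 0, "pixel count must be a power of two"
    norm = np.linalg.norm(flat)
    # Fall back to the uniform state if the image is all zeros.
    return flat / norm if norm > 0 else np.full(dim, 1 / np.sqrt(dim))

# Two augmented "views" of the same 2x2 image map to nearby states,
# so their overlap (fidelity) is close to 1; a contrastive objective
# pulls such positive pairs together in the representation space.
view_a = np.array([[0.9, 0.1], [0.2, 0.8]])
view_b = np.array([[0.85, 0.15], [0.25, 0.75]])
psi_a, psi_b = amplitude_encode(view_a), amplitude_encode(view_b)
fidelity = np.abs(np.vdot(psi_a, psi_b)) ** 2
print(f"overlap between views: {fidelity:.3f}")
```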