🤖 AI Summary
To address the high training cost and heavy quantum processing unit (QPU) time consumption of Quantum Boltzmann Machines (QBMs) in the Noisy Intermediate-Scale Quantum (NISQ) era, this work proposes a supervised QBM training framework built on parallel quantum annealing. Exploiting quantum annealing's intrinsic ability to sample from Boltzmann-like distributions, the authors design an improved parallel annealing strategy that updates multiple sets of weight parameters within a single annealing run, substantially reducing the number of QPU calls, while the supervised setting saves the qubits that would otherwise be needed to encode the inputs. Evaluated on medical image classification tasks from the MedMNIST benchmark, the model achieves accuracy comparable to that of similarly-sized classical convolutional neural networks (CNNs) while requiring markedly fewer training epochs, and the parallel annealing technique yields a speed-up of almost 70% in QPU usage over regular annealing-based executions. The result is an efficient supervised QBM training framework evaluated on realistic clinical data, a step toward practical quantum machine learning on NISQ devices.
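The core trick behind parallel annealing is that several independent Ising problems can be packed into one block-diagonal problem and submitted in a single sampler call; because no couplings connect the blocks, the joint Boltzmann distribution factorizes, so each block's marginal samples follow that instance's own distribution. Below is a minimal sketch of this packing in NumPy. The function names and array conventions are illustrative; the paper's actual minor-embedding of the packed problem onto QPU hardware is not reproduced here.

```python
import numpy as np

def pack_ising_instances(h_list, J_list):
    """Stack several independent Ising instances (h_k, J_k) into one
    block-diagonal instance, so a single anneal yields samples for
    all of them at once instead of one QPU call per instance."""
    sizes = [len(h) for h in h_list]
    n = sum(sizes)
    h = np.concatenate(h_list)
    J = np.zeros((n, n))
    offset = 0
    for size, J_k in zip(sizes, J_list):
        J[offset:offset + size, offset:offset + size] = J_k
        offset += size
    return h, J, sizes

def unpack_samples(samples, sizes):
    """Split joint samples (shape: n_reads x n) back into one
    sample array per packed instance."""
    splits = np.cumsum(sizes)[:-1]
    return np.split(samples, splits, axis=1)
```

With an annealer-style sampler, the packed `(h, J)` would be submitted once rather than `len(h_list)` times; this reduction in QPU invocations is where the reported time savings come from.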
📝 Abstract
Exploiting the fact that samples drawn from a quantum annealer inherently follow a Boltzmann-like distribution, annealing-based Quantum Boltzmann Machines (QBMs) have gained increasing popularity in the quantum research community. While they hold great promise for quantum speed-up, their use currently remains a costly endeavor, as large amounts of QPU time are required to train them; this limits their applicability in the NISQ era. Following the idea of Noè et al. (2024), who sought to alleviate this cost by incorporating parallel quantum annealing into their unsupervised training of QBMs, this paper presents an improved version of parallel quantum annealing that we employ to train QBMs in a supervised setting. Because it saves the qubits otherwise needed to encode the inputs, the supervised setting allows us to test our approach on medical images from the MedMNIST data set (Yang et al., 2023), thereby moving closer to real-world applicability of the technology. Our experiments show that QBMs trained with our approach already achieve reasonable results, comparable to those of similarly-sized Convolutional Neural Networks (CNNs), while requiring markedly fewer epochs than these classical models. Our parallel annealing technique leads to a speed-up of almost 70% compared to regular annealing-based QBM executions.
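The Boltzmann-like statistics of annealer samples are what make this training loop possible: the model-dependent expectations in the Boltzmann machine gradient can be estimated directly from the returned samples. The sketch below shows the standard contrastive update this enables, with a classical Gibbs sampler standing in for the annealer; the function names, hyperparameters, and the {-1, +1} spin convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def boltzmann_samples(W, b, n_samples=500, n_sweeps=50):
    """Draw approximate samples s in {-1, +1}^n from the Boltzmann
    distribution p(s) ~ exp(-E(s)) with E(s) = -0.5 * s'Ws - b's,
    via Gibbs sampling. In the annealing-based setting, these samples
    would come from the quantum annealer instead. W must be symmetric
    with a zero diagonal."""
    n = len(b)
    s = rng.choice([-1.0, 1.0], size=(n_samples, n))
    for _ in range(n_sweeps):
        for i in range(n):
            field = s @ W[:, i] + b[i]           # local field on spin i
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[:, i] = np.where(rng.random(n_samples) < p_up, 1.0, -1.0)
    return s

def qbm_update(W, b, data, lr=0.01):
    """One contrastive update: dW_ij tracks <s_i s_j>_data - <s_i s_j>_model,
    db_i tracks <s_i>_data - <s_i>_model."""
    model = boltzmann_samples(W, b)
    dW = data.T @ data / len(data) - model.T @ model / len(model)
    np.fill_diagonal(dW, 0.0)
    db = data.mean(axis=0) - model.mean(axis=0)
    return W + lr * dW, b + lr * db
```

Each call to `boltzmann_samples` corresponds to one round of annealer reads; combined with the packing scheme above, several such sampling problems can share a single anneal, which is the source of the reduced QPU budget.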