🤖 AI Summary
Quantum annealing (QA) hardware is costly, and end-to-end training on it is intractable because quantum dynamics are not differentiable. Method: This paper proposes a trainable quantum combinatorial optimization solver that combines deep unfolding with classical-to-quantum transfer learning. It decouples training from inference: parameters are first learned on classical hardware via deep unfolding, and the trained model is then transferred to a QA device for execution, bypassing the fundamental non-differentiability of QA. Contribution/Results: The approach enables efficient parameter optimization without gradient-based training on quantum hardware, significantly improving training efficiency and QA resource utilization. Experiments demonstrate faster convergence and shorter execution time than conventional sampling-based solvers, while maintaining scalability to larger combinatorial optimization problems.
📝 Abstract
Quantum annealing (QA) has attracted research interest as a sampler and combinatorial optimization problem (COP) solver. A recently proposed sampling-based QA solver significantly reduces the required number of qubits, making it capable of handling large COPs. Building on this, a trainable sampling-based COP solver has been proposed that optimizes its internal parameters from a dataset using a deep learning technique called deep unfolding. Although learning the internal parameters accelerates convergence, the sampler in the trainable solver has been restricted to a classical sampler owing to the training cost. In this study, to utilize QA in the trainable solver, we propose classical-quantum transfer learning, in which parameters are trained classically and the trained parameters are then used in the solver with QA. Numerical experiments demonstrate that the trainable quantum COP solver with classical-quantum transfer learning improves convergence speed and execution time over the original solver.
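The pipeline described above can be illustrated with a minimal toy sketch (not the paper's actual solver): a per-iteration schedule of internal parameters is trained classically by unrolling a differentiable surrogate solver, and the trained schedule is then handed to a non-differentiable sampler, which here stands in for the QA device. The QUBO instance, the mean-field surrogate, the finite-difference gradient (a stand-in for backpropagation through the unrolled iterations), and all names are illustrative assumptions.

```python
# Toy sketch of classical-quantum transfer learning via deep unfolding.
# All details (problem size, surrogate, schedule length) are assumptions
# for illustration; they are not the paper's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Small random Ising-style cost matrix standing in for a COP instance.
n = 8
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2

def energy(x, Q):
    return x @ Q @ x

# Deep unfolding: treat the T-step inverse-temperature schedule `beta`
# as trainable parameters of an unrolled, differentiable surrogate solver.
T = 10
beta = np.ones(T)  # trainable internal parameters
lr = 0.05

def unrolled_solve(beta, Q):
    # Differentiable mean-field relaxation: m <- tanh(-beta_t * Q m).
    m = np.full(n, 0.01)
    for b in beta:
        m = np.tanh(-b * (Q @ m))
    return m

# Classical training loop: finite-difference gradients of the relaxed
# energy w.r.t. the schedule (a cheap stand-in for backprop).
for step in range(200):
    base = energy(unrolled_solve(beta, Q), Q)
    grad = np.zeros(T)
    eps = 1e-4
    for t in range(T):
        b2 = beta.copy()
        b2[t] += eps
        grad[t] = (energy(unrolled_solve(b2, Q), Q) - base) / eps
    beta -= lr * grad

# Transfer: reuse the classically trained schedule in a non-differentiable
# sampler (a Metropolis annealer here plays the role of the QA device).
def sample_with_schedule(beta, Q, rng):
    x = rng.choice([-1.0, 1.0], size=n)
    for b in beta:
        for i in range(n):
            flip = x.copy()
            flip[i] *= -1
            dE = energy(flip, Q) - energy(x, Q)
            if dE < 0 or rng.random() < np.exp(-b * dE):
                x = flip
    return x

x = sample_with_schedule(beta, Q, rng)
print(energy(x, Q))
```

Because the sampler only consumes the trained schedule and is never differentiated through, it can be swapped for actual QA hardware without changing the training step, which is the crux of the transfer-learning idea.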