🤖 AI Summary
Current quantum hardware is constrained by limited qubit counts, hindering the solution of large-scale Ising-type combinatorial optimization problems. To address this, we propose a physics-inspired, graph neural network (GNN)-driven dynamic model compression framework. Our method employs self-supervised learning to predict spin alignment relationships and integrates progressive graph coarsening with a qubit-adaptive alignment-and-merging mechanism, enabling multi-level, quality-controllable Ising model compression while preserving the problem’s optimization structure. Crucially, it is the first approach to embed physical priors—specifically, Ising spin interaction principles—directly into the GNN architecture, enabling generalization across diverse graph topologies. Extensive evaluation on multiple Ising instances demonstrates that compressed problems can be efficiently solved on D-Wave quantum annealers with negligible degradation in solution quality (average performance loss <1.2%), thereby substantially extending the scale of problems solvable on near-term quantum hardware.
📝 Abstract
Hard combinatorial optimization problems, often mapped to Ising models, promise potential quantum advantage but are constrained by the limited qubit counts of near-term devices. We present a quantum-inspired framework that dynamically compresses large Ising models to fit quantum hardware of different sizes, bridging the gap between large-scale optimization and current hardware capabilities. Our method leverages a physics-inspired GNN architecture to capture complex interactions in Ising models and accurately predict alignments among neighboring spins (i.e., qubits) at the ground state. By progressively merging such aligned spins, we reduce the model size while preserving the underlying optimization structure. The framework also provides a natural trade-off between solution quality and size reduction, accommodating the hardware constraints of different quantum computing devices. Extensive numerical studies on Ising instances of diverse topologies show that our method can reduce instance size at multiple levels with virtually no loss in solution quality on the latest D-Wave quantum annealers.
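To make the merging step concrete, here is a minimal Python sketch of how two spins predicted to be aligned (or anti-aligned) at the ground state can be contracted in an Ising Hamiltonian \(H(s) = \sum_{i<j} J_{ij} s_i s_j + \sum_i h_i s_i\). The data layout, the function name `merge_spins`, and the sign convention are illustrative assumptions, not the paper's actual implementation; the paper's GNN supplies the alignment prediction that this routine consumes.

```python
def merge_spins(h, J, i, j, sign):
    """Merge spin j into spin i under the predicted relation s_j = sign * s_i.

    h:    dict {spin: field h_k}
    J:    dict {(a, b): coupling J_ab} with a < b
    sign: +1 (aligned) or -1 (anti-aligned), e.g. as predicted by a GNN
    Returns (h', J', offset): the reduced model plus a constant energy
    offset, so energies are preserved up to that constant.
    """
    assert sign in (+1, -1)
    h = dict(h)
    J = {tuple(sorted(k)): v for k, v in J.items()}
    offset = 0.0
    # Field term: h_j * s_j = (sign * h_j) * s_i
    h[i] = h.get(i, 0.0) + sign * h.pop(j, 0.0)
    for (a, b), coupling in list(J.items()):
        if j not in (a, b):
            continue
        k = a if b == j else b
        del J[(a, b)]
        if k == i:
            # J_ij * s_i * s_j = sign * J_ij, since s_i^2 = 1 -> constant
            offset += sign * coupling
        else:
            # J_jk * s_j * s_k = (sign * J_jk) * s_i * s_k
            key = tuple(sorted((i, k)))
            J[key] = J.get(key, 0.0) + sign * coupling
    return h, J, offset
```

On a three-spin triangle, for example, contracting one aligned pair yields a two-spin model plus a constant offset, so the ground-state energy of the compressed instance matches the original up to that offset. Repeating such merges drives the progressive, multi-level coarsening described above.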