🤖 AI Summary
Annealing machines (AMs) face inherent hardware-scale limitations, rendering them incapable of directly solving large-scale combinatorial optimization problems.
Method: This paper proposes the AM-GNN collaborative framework: AMs generate high-quality partial solutions to small-scale subproblems, which are used as implicit knowledge distilled into a graph neural network (GNN), guiding it to learn structured node representations; a multi-stage, GNN-initialized solver then tackles the original large-scale instance.
Contribution/Results: This work achieves the first end-to-end implicit knowledge transfer from AMs to GNNs, effectively bypassing the native capacity bottleneck of AMs. Experiments on standard combinatorial optimization benchmarks show that the method extends the effective solvable problem size of AMs by 3–5× while maintaining solution quality above 98%, balancing scalability and solution reliability.
📝 Abstract
Annealing Machines (AMs) have shown increasing capability in solving complex combinatorial problems, positioning themselves as a nearer-term alternative to the expected advances of future fully quantum solutions, yet they still face scaling limitations. In parallel, Graph Neural Networks (GNNs) have recently been adapted to solve combinatorial problems, showing competitive results and potentially high scalability due to their distributed nature. We propose a hybrid approach that aims to retain both the accuracy exhibited by AMs and the representational flexibility and scalability of GNNs. Our model comprises a compression step, followed by a supervised interaction in which partial solutions obtained from the AM guide local GNNs, from which node feature representations are obtained and combined to initialize an additional GNN-based solver that handles the target problem on the original graph. Intuitively, the AM solves the combinatorial problem indirectly by infusing its knowledge into the GNN. Experiments on canonical optimization problems show that the idea is feasible, effectively allowing the AM to solve problems of sizes beyond its original limits.
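The pipeline the abstract describes (compress, solve subproblems on the AM, distill into node features, initialize a full-graph solver) can be sketched end to end. The sketch below is a toy illustration under loud assumptions, not the paper's implementation: the "annealing machine" is a brute-force MaxCut solver restricted to small subgraphs, the "GNN" stages are replaced by a direct initialization plus greedy local search, and all names (`am_solve_subproblem`, `solve_large_instance`, `am_capacity`) are hypothetical.

```python
# Toy sketch of the AM -> GNN knowledge-transfer pipeline. All components are
# stand-ins chosen for self-containment: a brute-force "AM" limited to small
# subgraphs, and greedy bit-flip refinement in place of the GNN-based solver.
import itertools


def maxcut_value(edges, assignment):
    """Number of edges cut by a 0/1 node assignment."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])


def am_solve_subproblem(nodes, edges):
    """Brute-force stand-in for the size-limited annealing machine."""
    best, best_val = None, -1
    for bits in itertools.product([0, 1], repeat=len(nodes)):
        assignment = dict(zip(nodes, bits))
        val = maxcut_value(edges, assignment)
        if val > best_val:
            best, best_val = assignment, val
    return best


def partition(nodes, size):
    """Compression step: split the node set into AM-sized blocks."""
    nodes = sorted(nodes)
    return [nodes[i:i + size] for i in range(0, len(nodes), size)]


def solve_large_instance(nodes, edges, am_capacity=4):
    # 1) The AM produces partial solutions on each small subgraph; these play
    #    the role of the distilled node representations.
    features = {}
    for block in partition(nodes, am_capacity):
        block_set = set(block)
        sub_edges = [(u, v) for u, v in edges
                     if u in block_set and v in block_set]
        features.update(am_solve_subproblem(block, sub_edges))
    # 2) A solver on the full graph starts from the AM-informed initialization
    #    and greedily flips nodes while the cut improves (GNN-solver stand-in).
    improved = True
    while improved:
        improved = False
        for n in nodes:
            flipped = dict(features)
            flipped[n] ^= 1
            if maxcut_value(edges, flipped) > maxcut_value(edges, features):
                features, improved = flipped, True
    return features, maxcut_value(edges, features)
```

For example, on a 6-node cycle the AM (capacity 4) only ever sees subgraphs it can handle, yet the initialization it provides lets the full-graph refinement reach the optimal cut of 6. The design point being illustrated is that knowledge flows from the capacity-limited solver into the large-instance solver only through the per-node initialization, never through direct access to the full problem.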