🤖 AI Summary
This work addresses the challenges of black-box combinatorial optimization, where objective function evaluations are expensive and conventional binary encodings struggle to preserve the neighborhood structure of non-binary solution representations, such as permutations, leading to inefficient search and frequent infeasible solutions. To overcome these limitations, the authors integrate a binary autoencoder (bAE) with factorization machine with quantum annealing (FMQA). The bAE learns a compact binary latent representation from feasible solutions, aligning Hamming distances in the latent space with the original problem's distance metric to yield a smoother neighborhood structure; FMQA then optimizes a QUBO surrogate within this learned latent space. Experiments on the Traveling Salesman Problem demonstrate that the approach significantly outperforms handcrafted encodings, rapidly reaching high-quality solutions under limited evaluation budgets while maintaining feasibility throughout and mitigating premature convergence to local optima.
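The core FMQA step described above, fitting a second-order surrogate and minimizing it as a QUBO, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a standard second-order factorization machine, and brute-force enumeration stands in for the Ising machine or quantum annealer (so it only works for small bit counts).

```python
import itertools
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine: w0 + w.x + sum_{i<j} <V_i,V_j> x_i x_j."""
    inter = 0.5 * (np.square(x @ V).sum() - np.square(x) @ np.square(V).sum(axis=1))
    return w0 + w @ x + inter

def fm_to_qubo(w, V):
    """Convert FM parameters to a QUBO matrix.

    For binary x (x_i^2 = x_i), linear terms fold onto the diagonal, and the
    pairwise coefficients are the factor inner products <V_i, V_j>.
    """
    Q = V @ V.T                  # pairwise coefficients <V_i, V_j>
    np.fill_diagonal(Q, 0.0)
    Q = np.triu(Q)               # keep each i<j pair once
    Q += np.diag(w)              # linear terms on the diagonal
    return Q

def brute_force_qubo(Q):
    """Stand-in for an Ising machine: enumerate all binary vectors (small n only)."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

In the full loop, the minimizer of the QUBO is decoded back to a candidate tour (here, via the bAE decoder), evaluated on the true objective, and the new sample is used to refit the surrogate; the constant offset `w0` does not affect the argmin and is dropped from the QUBO.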
📝 Abstract
In black-box combinatorial optimization, objective evaluations are often expensive, so high-quality solutions must be found under a limited budget. Factorization machine with quantum annealing (FMQA) builds a quadratic surrogate model from evaluated samples and optimizes it on an Ising machine. However, FMQA requires binary decision variables, and for non-binary structures such as integer permutations, the choice of binary encoding strongly affects search efficiency. If the encoding fails to reflect the original neighborhood structure, small Hamming moves may not correspond to meaningful modifications in the original solution space, and constrained problems can yield many infeasible candidates that waste evaluations. Recent work combines FMQA with a binary autoencoder (bAE) that learns a compact binary latent code from feasible solutions, yet the mechanism behind its performance gains is unclear. Using a small traveling salesman problem as an interpretable testbed, we show that the bAE reconstructs feasible tours accurately and, compared with manually designed encodings at similar compression, better aligns tour distances with latent Hamming distances, yields smoother neighborhoods under small bit flips, and produces fewer local optima. These geometric properties explain why bAE+FMQA improves the approximation ratio faster while maintaining feasibility throughout optimization, and they provide guidance for designing latent representations for black-box optimization.
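The alignment property the abstract emphasizes, latent Hamming distances tracking distances between tours, can be measured directly. The sketch below is illustrative and assumes one common choice of tour distance (the number of undirected edges present in one tour but not the other); the paper's exact metric may differ, and `codes` would come from the bAE encoder.

```python
import itertools
import numpy as np

def tour_edge_distance(p, q):
    """Number of undirected edges that appear in tour p but not in tour q."""
    def edges(t):
        return {frozenset((t[i], t[(i + 1) % len(t)])) for i in range(len(t))}
    return len(edges(p) - edges(q))

def hamming(a, b):
    """Hamming distance between two binary latent codes."""
    return int(np.sum(a != b))

def alignment(tours, codes):
    """Pearson correlation between pairwise tour distances and latent Hamming distances.

    Higher correlation means small bit flips in the latent space tend to
    correspond to small moves in tour space, i.e., a smoother landscape.
    """
    td, hd = [], []
    for i, j in itertools.combinations(range(len(tours)), 2):
        td.append(tour_edge_distance(tours[i], tours[j]))
        hd.append(hamming(codes[i], codes[j]))
    return np.corrcoef(td, hd)[0, 1]
```

Comparing this score between bAE codes and a handcrafted encoding at similar bit length is one concrete way to test the geometric explanation offered above.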