🤖 AI Summary
Single Ising machines (IMs) suffer from limited capacity and struggle to solve large-scale combinatorial optimization problems.
Method: We propose a parallel IM network architecture and its unified formal execution model. Leveraging a synchronized probabilistic convergence theory, we establish the first rigorous convergence guarantee for parallel IM systems under finite synchronization frequencies. We further uncover the fundamental trade-off between synchronization frequency and solution quality/efficiency, and derive hardware-aware parameter configuration heuristics.
Results: Numerical experiments validate the theoretical bounds and demonstrate practical efficacy. Our framework provides both theoretically grounded design principles and actionable engineering guidelines for deploying multi-IM cooperative systems—bridging the gap between theoretical analysis and hardware implementation.
📝 Abstract
Analog Ising machines (IMs) are an increasingly prominent area of computer architecture research, offering high-quality, low-latency, and energy-efficient solutions to intractable computing tasks. However, each IM has a fixed capacity and offers little to no utility on out-of-capacity problems. Previous works have proposed parallel, multi-IM architectures to circumvent this limitation. In this work, we theoretically and numerically investigate trade-offs in parallel IM networks to guide researchers in this burgeoning field. We propose formal models of parallel IM execution, then provide theoretical guarantees for probabilistic convergence. Numerical experiments illustrate our findings and provide empirical insight into the high- and low-synchronization-frequency regimes. We also provide practical heuristics for parameter and model selection, informed by our theoretical and numerical findings.
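To make the synchronization-frequency trade-off concrete, here is a minimal toy sketch (not the paper's actual model or implementation): several replica "machines" each run Metropolis sweeps on the same Ising coupling matrix `J`, and every `sync_period` sweeps the network synchronizes by having all machines adopt the best-energy state found so far. The function name `parallel_im` and all parameters are illustrative assumptions; a real analog IM network would differ substantially.

```python
import numpy as np

def energy(J, s):
    """Ising energy E(s) = -1/2 * s^T J s (assumes zero diagonal in J)."""
    return -0.5 * s @ J @ s

def parallel_im(J, n_machines=4, sweeps=200, sync_period=50, beta=2.0, seed=0):
    """Toy parallel IM network: independent Metropolis dynamics with
    periodic best-state synchronization across machines."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    states = rng.choice([-1, 1], size=(n_machines, n))
    for t in range(sweeps):
        for k in range(n_machines):
            for i in rng.permutation(n):  # one Metropolis sweep per machine
                dE = 2 * states[k, i] * (J[i] @ states[k])
                if dE < 0 or rng.random() < np.exp(-beta * dE):
                    states[k, i] *= -1
        if (t + 1) % sync_period == 0:  # synchronization step
            best = min(range(n_machines), key=lambda k: energy(J, states[k]))
            states[:] = states[best]
    best = min(range(n_machines), key=lambda k: energy(J, states[k]))
    return states[best], energy(J, states[best])
```

Shrinking `sync_period` trades communication overhead for faster consensus, which is the knob the theoretical bounds above characterize; setting it larger than `sweeps` recovers fully independent machines.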