🤖 AI Summary
This work addresses the frequency degradation and interconnect congestion (O(N²) complexity) arising from fully connected architectures in large-scale Ising machine hardware implementations. We propose a systematic graph sparsification method based on node replication, enabling scalable hardware deployment while preserving computational fidelity. Our contributions are threefold: (1) a sparsification framework jointly optimizing hardware scalability and optimization accuracy, achieving constant-time update latency; (2) theoretical and empirical analysis revealing how relaxing exact-solution requirements mitigates convergence overhead; and (3) a modeling paradigm tailored to naturally sparse problems (e.g., invertible logic). Evaluated via probabilistic bit circuits in ASAP7 and FPGA prototypes, our approach reduces interconnect complexity to O(N), sustains constant update frequency across platforms, and incurs only a controlled convergence-time overhead under approximate solving, outperforming dense baselines.
📝 Abstract
In recent years, hardware implementations of Ising machines have emerged as a viable alternative to quantum computing for solving hard optimization problems, among other applications. Unlike quantum hardware, classical systems can achieve dense connectivity. However, we show that dense connectivity leads to severe frequency slowdowns and interconnect congestion that scale unfavorably with system size. As a scalable solution, we propose a systematic sparsification method for dense graphs that introduces copy nodes to limit the number of neighbors per graph node. In addition to resolving interconnect congestion, this approach enables constant frequency scaling, where all spins in a network can be updated in constant time. On the other hand, sparsification introduces new difficulties, such as constraint breaking between copied spins and increased convergence times for solving optimization problems, especially if exact ground states are sought. Relaxing the exact-solution requirement, we find the overheads in convergence times to be milder. We demonstrate these ideas by designing probabilistic bit Ising machines using ASAP7 process design kits as well as Field Programmable Gate Array (FPGA)-based implementations. Finally, we show how formulating problems in naturally sparse networks (e.g., by invertible logic) sidesteps the challenges introduced by sparsification methods. Our results are applicable to a broad family of Ising machines using different hardware implementations.
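The copy-node idea described above can be illustrated in a few lines of code. The sketch below is a hypothetical construction (function name, chain topology, and the `j_copy` coupling strength are our own illustrative choices, not necessarily the paper's exact scheme): each spin whose degree exceeds a cap is split into a chain of copies, its original couplings are distributed among the copies, and consecutive copies are tied together with a strong ferromagnetic coupling (positive `J` under the convention E = -Σ J_ij s_i s_j) so they tend to agree at readout. Every node in the resulting graph has at most `max_degree` neighbors, turning O(N²) all-to-all interconnect into O(N) local wiring.

```python
import numpy as np
from collections import defaultdict

def sparsify_by_copies(J, max_degree, j_copy=10.0):
    """Sparsify a dense symmetric coupling matrix J by node replication.

    Each spin with more than `max_degree` usable edge slots is replaced by a
    chain of copy nodes. Two slots per copy are reserved for the chain links,
    so each copy carries at most (max_degree - 2) of the original couplings.
    Returns the sparsified coupling matrix and a map: original spin -> list
    of copy-node indices. Illustrative sketch only.
    """
    n = J.shape[0]
    ext_cap = max_degree - 2          # external-edge slots per copy node
    assert ext_cap >= 1, "max_degree must be at least 3"

    deg = [int(np.count_nonzero(J[i])) for i in range(n)]
    chain = defaultdict(list)         # original spin -> its copy-node indices
    idx = 0
    for i in range(n):
        n_copies = max(1, -(-deg[i] // ext_cap))   # ceil(deg / ext_cap)
        for _ in range(n_copies):
            chain[i].append(idx)
            idx += 1

    Js = np.zeros((idx, idx))
    used = defaultdict(int)           # external edges placed on each copy

    def free_slot(i):
        # First copy of spin i that still has external capacity.
        for c in chain[i]:
            if used[c] < ext_cap:
                return c
        raise RuntimeError("capacity accounting error")

    # Distribute the original couplings over the copies.
    for i in range(n):
        for j in range(i + 1, n):
            if J[i, j] != 0:
                a, b = free_slot(i), free_slot(j)
                Js[a, b] = Js[b, a] = J[i, j]
                used[a] += 1
                used[b] += 1

    # Strong ferromagnetic links keep each chain's copies aligned.
    for i in range(n):
        for a, b in zip(chain[i], chain[i][1:]):
            Js[a, b] = Js[b, a] = j_copy

    return Js, chain
```

For example, an 8-spin all-to-all graph (each spin has 7 neighbors) with `max_degree=4` expands each spin into 4 copies (32 nodes total), but no node in the sparsified graph exceeds degree 4. The constraint-breaking difficulty mentioned in the abstract corresponds to copies within one chain disagreeing despite the `j_copy` penalty.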