🤖 AI Summary
This work addresses the challenge of efficiently solving combinatorial optimization problems on non-ideal nanoscale devices by implementing the Probabilistic Approximate Optimization Algorithm (PAOA) on a 64×64 perimeter-gated single-photon avalanche diode (pgSPAD) array. The approach treats the optimization landscape as variational and iteratively learns circuit parameters through sampling, directly leveraging, rather than calibrating out, intrinsic device stochasticity such as device-specific activation-function asymmetry. This enables, for the first time, calibration-free optimization on inherently noisy nanohardware. Fabricated in 0.35 μm CMOS technology, the pgSPAD chip exhibits a Gompertz-type asymmetric activation function and achieves high approximation ratios on 26-spin Sherrington–Kirkpatrick problems using $2p$ parameters (up to $p = 17$ layers), with hardware inference results closely matching CPU-based simulations.
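To make the activation asymmetry concrete, the sketch below contrasts a symmetric logistic p-bit activation with a Gompertz-type one. The Gompertz parameters (`a`, `b`, `c`) and the `pbit_sample` helper are illustrative assumptions, not fitted pgSPAD values or the paper's implementation:

```python
import math
import random

def logistic(x):
    """Symmetric sigmoid commonly assumed for idealized p-bits."""
    return 1.0 / (1.0 + math.exp(-x))

def gompertz(x, a=1.0, b=1.0, c=1.0):
    """Asymmetric Gompertz-type activation, a stand-in for a
    device-specific pgSPAD response (a, b, c are placeholders)."""
    return a * math.exp(-b * math.exp(-c * x))

def pbit_sample(x, activation, rng):
    """One stochastic p-bit update: output +1 with probability activation(x)."""
    return 1 if rng.random() < activation(x) else -1

# The logistic is symmetric about 0.5; the Gompertz curve is not,
# which is the kind of per-device bias PAOA can absorb into its
# variational parameters rather than calibrate away.
print(logistic(0.0), gompertz(0.0))  # 0.5 vs exp(-1) ~ 0.368
```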
📝 Abstract
Combinatorial optimization problems are central to science and engineering, and specialized hardware, from quantum annealers to classical Ising machines, is being actively developed to address them. These systems typically sample from a fixed energy landscape defined by the problem Hamiltonian that encodes the discrete optimization problem. The recently introduced Probabilistic Approximate Optimization Algorithm (PAOA) takes a different approach: it treats the optimization landscape itself as variational, iteratively learning circuit parameters from samples. Here, we demonstrate PAOA on a 64$\times$64 perimeter-gated single-photon avalanche diode (pgSPAD) array fabricated in 0.35 $\mu$m CMOS, the first realization of the algorithm using intrinsically stochastic nanodevices. Each p-bit exhibits a device-specific, asymmetric (Gompertz-type) activation function due to dark-count variability. Rather than calibrating devices to enforce a uniform symmetric (logistic/tanh) activation, PAOA learns around device variations, absorbing residual activation and other mismatches into the variational parameters. On canonical 26-spin Sherrington-Kirkpatrick instances, PAOA achieves high approximation ratios with $2p$ parameters ($p$ up to 17 layers), and pgSPAD-based inference closely tracks CPU simulations. These results show that variational learning can accommodate the non-idealities inherent to nanoscale devices, suggesting a practical path toward larger-scale, CMOS-compatible probabilistic computers.
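As a minimal toy sketch of the layered-sampling idea (not the paper's hardware pipeline), the code below builds a small Sherrington-Kirkpatrick instance, draws samples through $p$ Gibbs-update layers with one inverse-temperature parameter per layer, and scores the best sample against the brute-force ground energy. The per-layer parameterization, the fixed schedule, and the instance size are assumptions for illustration only; the paper uses $2p$ learned parameters and 26-spin instances:

```python
import itertools
import math
import random

def sk_instance(n, rng):
    """Random SK couplings, J_ij ~ N(0, 1/sqrt(n)), symmetric, zero diagonal."""
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            J[i][j] = J[j][i] = rng.gauss(0.0, 1.0 / math.sqrt(n))
    return J

def energy(J, s):
    """Ising energy E(s) = -sum_{i<j} J_ij s_i s_j."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def layered_sample(J, betas, rng):
    """One sample: each 'layer' is a Gibbs sweep at its own inverse
    temperature beta (a schematic stand-in for learned PAOA parameters)."""
    n = len(J)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for beta in betas:
        for i in range(n):
            h = sum(J[i][j] * s[j] for j in range(n) if j != i)
            s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * beta * h)) else -1
    return s

rng = random.Random(0)
n = 8  # toy size so the exact ground state is brute-forceable
J = sk_instance(n, rng)

# Exact ground energy by enumeration (2^n configurations).
e_ground = min(energy(J, s) for s in itertools.product((-1, 1), repeat=n))

# "Inference": sample through a fixed p = 4 layer schedule, keep the best.
betas = [0.2, 0.5, 1.0, 2.0]
e_best = min(energy(J, layered_sample(J, betas, rng)) for _ in range(200))
ratio = e_best / e_ground  # approximation ratio; 1.0 means ground state found
```

In the hardware setting, the role of `layered_sample` is played by the pgSPAD array, and an outer loop would update the per-layer parameters from the sampled energies instead of using a fixed schedule.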