🤖 AI Summary
This work addresses process-variation-induced mismatch in probabilistic computing on CMOS chips. We propose a hardware-algorithm co-optimization methodology and fabricate a 0.44 mm² probabilistic computing chip integrating 440 spin units arranged in a Chimera topology, supporting logic-gate modeling, full-adder synthesis, and MaxCut optimization. Key innovations include an on-chip hardware-aware contrastive divergence training algorithm, current-mode neuron circuits, co-layout of analog and digital standard cells, and a power supply shared between analog and digital domains, which together improve process robustness without sacrificing integration density. Experimental results show higher area efficiency and markedly better tolerance to process variation than conventional designs, establishing a scalable, variation-resilient hardware paradigm for probabilistic computing in AI accelerators.
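The hardware-aware contrastive divergence idea summarized above can be sketched in a few lines: sample through a p-bit model that includes the mismatched analog activation, so the learned weights absorb the variation. The mismatch model (a random tanh gain per neuron), the learning rate, and the AND-gate task below are illustrative assumptions, not the chip's actual circuit parameters or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 3  # fully visible p-bit network: inputs A, B and one output
# AND-gate truth table encoded as +/-1 spins (the training data)
data = np.array([[-1, -1, -1],
                 [-1,  1, -1],
                 [ 1, -1, -1],
                 [ 1,  1,  1]], dtype=float)

# Hypothetical mismatch model: each p-bit's activation has a random
# gain, standing in for process variation in the analog neuron circuit.
gain = rng.normal(1.0, 0.2, size=N)

def sweep(s, J, h):
    """One asynchronous p-bit sweep: m_i = sgn(tanh(g_i * I_i) - u), u ~ U(-1, 1)."""
    s = s.copy()
    for i in rng.permutation(N):
        I = J[i] @ s + h[i]
        s[i] = 1.0 if np.tanh(gain[i] * I) > rng.uniform(-1, 1) else -1.0
    return s

# Hardware-aware CD-1: the negative phase is sampled through the
# mismatched activation, so the learned J and h compensate for it.
J, h, lr = np.zeros((N, N)), np.zeros(N), 0.05
for step in range(3000):
    v = data[rng.integers(len(data))]   # positive phase: a data pattern
    vm = sweep(v, J, h)                 # negative phase: one reconstruction
    dJ = np.outer(v, v) - np.outer(vm, vm)
    np.fill_diagonal(dJ, 0.0)           # no self-coupling
    J += lr * dJ
    h += lr * (v - vm)

print("input-output couplings:", J[2, 0], J[2, 1], "output bias:", h[2])
```

After training, the learned parameters reflect the AND statistics: both inputs couple the output upward, while a negative bias holds the output low unless both inputs are high.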
📝 Abstract
This paper demonstrates a physics-inspired probabilistic-bit solver with 440 spins configured in a Chimera graph, occupying an area of 0.44 mm². Area efficiency is maximized through a current-mode implementation of the neuron update circuit, standard-cell design for analog blocks pitch-matched to digital blocks, and a shared power supply for both digital and analog components. The process-variation-related mismatches introduced by this approach are effectively mitigated by a hardware-aware contrastive divergence algorithm during training. We validate the chip's ability to perform probabilistic computing tasks such as modeling logic gates and full adders, as well as optimization tasks such as MaxCut, demonstrating its potential for AI and machine learning applications.
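As a rough illustration of how a spin array tackles MaxCut: assign each node a spin in {-1, +1}, couple connected nodes antiferromagnetically so that cut edges are energetically favoured, and anneal the p-bit noise from hot to cold. The toy 5-node graph, annealing schedule, and sweep count below are assumptions for illustration; the chip's 440-spin Chimera fabric and its actual schedules are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 5-node weighted graph (symmetric adjacency matrix).
W = np.array([[0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0]], dtype=float)
W = np.maximum(W, W.T)  # ensure symmetry
N = len(W)

def cut_value(s):
    """Total weight of edges crossing the partition defined by spins s."""
    return 0.25 * np.sum(W * (1 - np.outer(s, s)))  # each edge counted twice

def anneal(n_sweeps=500):
    """p-bit sampling with an increasing inverse temperature (assumed schedule)."""
    s = rng.choice([-1.0, 1.0], size=N)
    for t in range(n_sweeps):
        beta = 0.1 + 3.0 * t / n_sweeps
        for i in rng.permutation(N):
            I = -W[i] @ s  # antiferromagnetic coupling favours cut edges
            s[i] = 1.0 if np.tanh(beta * I) > rng.uniform(-1, 1) else -1.0
    return s

s = anneal()
# Brute-force reference over all 2^N partitions (feasible only for tiny N).
best = max(cut_value(2.0 * np.array(list(np.binary_repr(k, N)), float) - 1.0)
           for k in range(2 ** N))
print("annealed cut:", cut_value(s), "optimum:", best)
```

On hardware, the same loop corresponds to letting the coupled spin units relax while the noise amplitude is ramped down, with the brute-force check replaced by reading out the final spin state.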