Circuit realization and hardware linearization of monotone operator equilibrium networks

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implementing trainable infinite-depth neural networks on analog hardware remains challenging due to physical constraints and gradient estimation inaccuracies. Method: This work proposes Monotone Operator Networks (MONs) based on resistive-diode circuits. First, it derives a physically realizable “diode ReLU” activation function from the non-ideal I–V characteristics of diodes to construct a monotone operator. Second, it introduces hardware linearization—a circuit-level technique enabling direct backward propagation of gradients, bypassing digital simulation and avoiding gradient estimation errors. Third, it establishes circuit-theoretic models and performs device-level simulations to verify end-to-end trainability for both feedforward and asymmetric cascaded architectures. Contribution/Results: Experiments demonstrate substantial improvements in compactness and training efficiency of analog neural networks. The framework supports extensible activation functions using diverse nonlinear devices, establishing a novel paradigm for fully analog, learnable neuromorphic hardware.
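The "diode ReLU" the summary describes arises because a non-ideal diode's exponential I-V curve, placed in a simple circuit, yields a smooth, shifted ReLU at the output port. Below is a minimal numerical sketch of that idea, assuming a Shockley diode model in series with a load resistor; all component values are hypothetical, and the paper derives its exact activation from its own circuit topology and device parameters.

```python
import numpy as np

# Shockley model of a non-ideal diode: I(V) = I_S * (exp(V / (n*V_T)) - 1).
I_S = 1e-12    # reverse saturation current (A), hypothetical value
N = 1.5        # ideality factor, hypothetical value
V_T = 0.02585  # thermal voltage near 300 K (V)
R = 1e3        # load resistor (ohm), hypothetical value

def diode_current(v):
    return I_S * np.expm1(v / (N * V_T))

def diode_relu(v_in, iters=60):
    """Output voltage across R for a diode in series with R.

    Solves KCL, v_out = R * I(v_in - v_out), by bisection. The result is
    ~0 V for inputs below the diode drop and roughly v_in minus the drop
    above it: a smooth, shifted ReLU induced by the non-ideal I-V curve.
    """
    lo, hi = -1.0, max(v_in, 0.0) + 1.0
    f = lambda v: R * diode_current(v_in - v) - v  # strictly decreasing in v
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For negative inputs the diode blocks and the output stays near zero; for inputs well above the diode drop the output tracks the input minus a roughly logarithmic voltage drop, giving the ReLU-like shape.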

📝 Abstract
It is shown that the port behavior of a resistor-diode network corresponds to the solution of a ReLU monotone operator equilibrium network (a neural network in the limit of infinite depth), giving a parsimonious construction of a neural network in analog hardware. We furthermore show that the gradient of such a circuit can be computed directly in hardware, using a procedure we call hardware linearization. This allows the network to be trained in hardware, which we demonstrate with a device-level circuit simulation. We extend the results to cascades of resistor-diode networks, which can be used to implement feedforward and other asymmetric networks. We finally show that different nonlinear elements give rise to different activation functions, and introduce the novel diode ReLU which is induced by a non-ideal diode model.
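The "equilibrium network" in the abstract is a network whose output is a fixed point rather than the result of finitely many layers; the circuit's port behavior settles to that fixed point physically. A minimal numerical sketch of the equilibrium computation (not the circuit), with hypothetical dimensions and weights scaled so plain iteration contracts:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 8  # input / hidden dimensions, hypothetical sizes

# Random weights, scaled so ||W||_2 < 1; since ReLU is 1-Lipschitz,
# the fixed-point iteration below is then a contraction.
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)
U = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def relu(z):
    return np.maximum(z, 0.0)

def equilibrium(x, tol=1e-10, max_iter=1000):
    """Fixed point z* = relu(W z* + U x + b), found by Picard iteration.

    This plays the role of an infinitely deep weight-tied network: each
    iteration is one "layer", and z* is the depth-infinity limit.
    """
    z = np.zeros(n)
    for _ in range(max_iter):
        z_next = relu(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

x = rng.standard_normal(d)
z_star = equilibrium(x)
```

In the paper's setting, monotonicity of the resistor-diode operator guarantees a unique equilibrium; the simple spectral-norm scaling above is just one convenient way to get convergence in a toy sketch.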
Problem

Research questions and friction points this paper is trying to address.

Realizing neural networks in analog hardware circuits
Computing gradients directly in hardware for training
Extending circuit implementations to various network architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analog hardware implementation using resistor-diode networks
Hardware linearization enables direct gradient computation
Non-ideal diodes create novel ReLU activation functions
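Hardware linearization computes the gradient with the physical circuit itself. A purely numerical analogue of what that gradient must equal is implicit differentiation through the equilibrium: the backward pass is itself a linear fixed-point problem. The sketch below is a standard adjoint computation for a ReLU equilibrium network, with hypothetical sizes and a finite-difference check; it is not the paper's circuit procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 6  # hypothetical sizes

W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)  # contraction => unique equilibrium
U = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x = rng.standard_normal(d)

relu = lambda z: np.maximum(z, 0.0)

def solve_forward(bias):
    """Equilibrium z* = relu(W z* + U x + bias) by plain iteration."""
    z = np.zeros(n)
    for _ in range(2000):
        z = relu(W @ z + U @ x + bias)
    return z

z_star = solve_forward(b)
g = z_star  # dL/dz* for the toy loss L = 0.5 * ||z*||^2
D = (W @ z_star + U @ x + b > 0).astype(float)  # local ReLU slopes

# Adjoint fixed point: u = W^T (D * u) + g solves u = (I - (D W)^T)^{-1} g,
# and the implicit function theorem gives dL/db = D * u.
u = np.zeros(n)
for _ in range(2000):
    u = W.T @ (D * u) + g
grad_b = D * u

# Finite-difference check of the analytic gradient.
eps = 1e-6
grad_fd = np.zeros(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    lp = 0.5 * np.sum(solve_forward(b + e) ** 2)
    lm = 0.5 * np.sum(solve_forward(b - e) ** 2)
    grad_fd[i] = (lp - lm) / (2 * eps)
```

The backward pass has the same fixed-point structure as the forward pass, which is what makes computing it with a linearized copy of the same circuit plausible in the first place.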