🤖 AI Summary
Conventional physical neural networks rely on external optimization algorithms, hindering autonomous learning and limiting hardware efficiency.
Method: We propose a self-learning physical architecture in which the learning rule is intrinsically encoded in the system's Hamiltonian, enabling physically embodied learning. Specifically, we design a long/short-term memory coupling mechanism: a coherent Ising machine (CIM) stores long-term synaptic weights, while an auxiliary spinor field encodes short-term activation memory; the two components co-evolve dynamically. The architecture is realized as a network of multimodal parametrically driven resonators with nonlinear field–field coupling and self-organizing learning dynamics.
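To make the long/short-term split concrete, here is a deliberately minimal toy sketch, not the paper's Hamiltonian dynamics: a fast activation variable `h` stands in for the short-term memory field, a slowly drifting weight matrix `W` stands in for the long-term CIM memory, and a continuous Hebbian-like drift term stands in for the nonlinear field–field coupling. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def co_evolve(patterns, tau=0.1, eta=0.01, dt=0.01,
              steps_per_example=200, epochs=20):
    """Toy two-timescale dynamics: a fast activation h relaxes toward
    the clamped input plus recurrent drive, while the slow weights W
    drift continuously via a Hebbian-like term outer(h, s).
    No gradient or explicit weight-update step is ever invoked."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for s in patterns:
            h = np.zeros(n)
            for _ in range(steps_per_example):
                h += (dt / tau) * (-h + W @ s + s)   # fast: short-term memory
                W += dt * eta * np.outer(h, s)       # slow: long-term memory
    return W

# Two orthogonal +/-1 example patterns, "shown" to the system.
pats = np.array([[1., -1., 1., -1.],
                 [1., 1., -1., -1.]])
W = co_evolve(pats)
print(np.sign(W @ pats[0]))  # recall of the first stored pattern
```

After co-evolving on the examples, `sign(W @ pattern)` reproduces each stored pattern: the slow variable has absorbed the examples purely through the coupled dynamics, which is the qualitative point the summary makes, not a reproduction of the paper's model.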
Results: Numerical simulations demonstrate that the system achieves pattern recognition and response generalization from input examples alone, without gradient-based updates or explicit weight adjustments. This work establishes, for the first time, a Hamiltonian-level physical instantiation of learning rules and end-to-end autonomous learning in an analog physical system.
📝 Abstract
Physical information processors can learn from examples if they are modified according to an abstract parameter-update equation, termed a learning rule. We introduce a physical model for self-learning that encodes the learning rule in the Hamiltonian of the system. The model consists of a network of multimodal resonators. One of the modes is driven parametrically into a bistable regime, forming a coherent Ising machine (CIM) that provides the long-term memory storing the learned responses (weights). The CIM is augmented with an additional spinor field that acts as short-term (activation) memory. We numerically demonstrate that, in the presence of suitable nonlinear interactions between the long-term-memory Ising machine and the short-term-memory auxiliary field, the system autonomously learns from examples.
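The bistable parametric mode at the heart of the CIM can be sketched with the standard mean-field model of a network of degenerate parametric oscillators, a common textbook approximation rather than the specific multimode Hamiltonian of this work: each amplitude obeys dx_i/dt = (p − 1)x_i − x_i³ + ε Σ_j J_ij x_j, and above pump threshold (p > 1) it settles into one of two states near ±√(p − 1), whose sign encodes an Ising spin. Parameter values below are illustrative.

```python
import numpy as np

def simulate_cim(J, p=2.0, eps=0.1, dt=0.01, steps=5000, seed=0):
    """Euler-integrate the mean-field CIM amplitude equations:
        dx_i/dt = (p - 1) x_i - x_i**3 + eps * sum_j J[i, j] x_j
    Small random initial noise seeds the symmetry breaking between
    the two bistable branches."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(J.shape[0])
    for _ in range(steps):
        x = x + dt * ((p - 1.0) * x - x**3 + eps * (J @ x))
    return x

# A single uncoupled resonator pumped above threshold (p = 2):
# the amplitude settles near +/- sqrt(p - 1) = +/- 1.
x1 = simulate_cim(np.zeros((1, 1)))
print(abs(x1[0]))

# Four ferromagnetically coupled resonators: every amplitude
# latches onto one of the two bistable branches, well away from zero.
J = np.ones((4, 4)) - np.eye(4)
x4 = simulate_cim(J)
print(np.sign(x4))
```

The signs of the saturated amplitudes are the Ising spins that, in the paper's architecture, serve as the long-term weight memory; here they merely demonstrate the bistability mechanism.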