🤖 AI Summary
Deep state-space models (SSMs) invite physical, RNN-like implementations, but training them requires learning algorithms that respect core physical principles and avoid the memory and Jacobian costs of backpropagation. Method: This paper proposes Recurrent Hamiltonian Echo Learning (RHEL), which exploits the time-reversal properties of non-dissipative Hamiltonian systems to provably compute loss gradients as finite differences of physical trajectories. RHEL needs only three forward passes regardless of model size, with no explicit Jacobian computation and no variance in the gradient estimate. Introduced first in continuous time, it is formally equivalent to the continuous adjoint state method; a discrete-time version is equivalent to backpropagation through time (BPTT) when applied to a class of recurrent modules called Hamiltonian Recurrent Units (HRUs), which stack into hierarchical Hamiltonian SSMs (HSSMs). Contribution/Results: On mid- to long-range time-series classification and regression tasks with sequences up to ~50k steps, RHEL-trained HSSMs with linear and nonlinear dynamics consistently match BPTT, pointing toward scalable, energy-efficient physical systems with self-learning capabilities for sequence modelling.
📝 Abstract
Deep State Space Models (SSMs) reignite physics-grounded compute paradigms, as RNNs can natively be embodied in dynamical systems. This calls for dedicated learning algorithms obeying core physical principles, with efficient techniques to simulate these systems and guide their design. We propose Recurrent Hamiltonian Echo Learning (RHEL), an algorithm which provably computes loss gradients as finite differences of physical trajectories of non-dissipative, Hamiltonian systems. In ML terms, RHEL only requires three "forward passes" irrespective of model size, without explicit Jacobian computation, nor incurring any variance in the gradient estimation. Motivated by the physical realization of our algorithm, we first introduce RHEL in continuous time and demonstrate its formal equivalence with the continuous adjoint state method. To facilitate the simulation of Hamiltonian systems trained by RHEL, we propose a discrete-time version of RHEL which is equivalent to Backpropagation Through Time (BPTT) when applied to a class of recurrent modules which we call Hamiltonian Recurrent Units (HRUs). This setting allows us to demonstrate the scalability of RHEL by generalizing these results to hierarchies of HRUs, which we call Hamiltonian SSMs (HSSMs). We apply RHEL to train HSSMs with linear and nonlinear dynamics on a variety of time-series tasks ranging from mid-range to long-range classification and regression with sequence length reaching $\sim 50k$. We show that RHEL consistently matches the performance of BPTT across all models and tasks. This work opens new doors for the design of scalable, energy-efficient physical systems endowed with self-learning capabilities for sequence modelling.
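To make the "gradients as finite differences of physical trajectories" idea concrete, here is a toy illustration. This sketch is *not* the paper's RHEL algorithm (RHEL uses time-reversed "echo" passes of the same Hamiltonian system); it only shows the simpler fact that a loss defined on a simulated Hamiltonian trajectory can be differentiated with a few extra forward simulations and no Jacobian of the dynamics. All names (`simulate`, `loss`, the harmonic-oscillator Hamiltonian) are invented for this example.

```python
import math

def simulate(k, q=1.0, p=0.0, dt=1e-3, T=1.0):
    """Leapfrog (symplectic) integration of a harmonic oscillator
    with Hamiltonian H(q, p) = p**2 / 2 + k * q**2 / 2."""
    for _ in range(int(T / dt)):
        p -= 0.5 * dt * k * q   # half kick
        q += dt * p             # drift
        p -= 0.5 * dt * k * q   # half kick
    return q

def loss(k):
    # Loss read off the trajectory endpoint.
    return simulate(k) ** 2

# Central finite difference of two physical trajectories:
# two extra "forward passes", no Jacobian, no stored activations.
k, eps = 1.0, 1e-5
grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)

# Analytic check: q(T) = cos(sqrt(k) * T), so
# dL/dk = -T * sin(2 * sqrt(k) * T) / (2 * sqrt(k)) ≈ -0.4546 at k = T = 1.
exact = -math.sin(2.0) / 2.0
print(grad, exact)
```

Note the design point this illustrates: because the dynamics are simulated forward in both evaluations, the gradient comes out of trajectory differences alone, which is what makes a physical (in-hardware) realization conceivable; RHEL achieves the analogous result for all parameters at once with a fixed three-pass budget.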