🤖 AI Summary
This paper addresses the theoretical capability limits of neural oscillators, architectures comprising coupled second-order ordinary differential equations (ODEs) and multilayer perceptrons (MLPs), in approximating causal continuous operators and uniformly asymptotically incrementally stable second-order dynamical systems.
Method: We integrate second-order ODE modeling, MLP representational analysis, causal operator theory, and incremental stability theory to derive rigorous upper bounds on approximation error within the space of uniformly continuous functions.
Contribution/Results: We establish the first polynomial decay rate for the upper approximation error, which scales inversely with the widths of the two MLPs, thereby mitigating the curse of parametric complexity. The framework unifies causal operator approximation and dynamical system modeling, and extends naturally to linear continuous-time state-space models. Two numerical experiments empirically validate the predicted error decay rates. This work provides the first theoretically grounded, quantitatively characterized approximation guarantee for neural oscillators in long-sequence modeling.
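To make the architecture under analysis concrete, here is a minimal sketch of a neural oscillator: a second-order ODE whose right-hand side is a one-hidden-layer network, discretized in time and followed by an MLP readout. The specific coRNN-style form, the symplectic-Euler discretization, and all parameter names below are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two-layer perceptron readout; its width (rows of W1) is one of the
    # quantities the approximation bound scales with.
    return W2 @ np.tanh(W1 @ x + b1) + b2

def neural_oscillator(u_seq, dt, params):
    """Illustrative coRNN-style neural oscillator (assumed form):
        y'' = tanh(W y + Wp y' + V u + b) - gamma * y - eps * y'
    integrated with a symplectic Euler step, then passed through an MLP.
    """
    W, Wp, V, b, gamma, eps, mlp_params = params
    m = W.shape[0]
    y = np.zeros(m)   # hidden state
    z = np.zeros(m)   # hidden velocity y'
    outputs = []
    for u in u_seq:
        # Update velocity from the second-order dynamics, then the state.
        z = z + dt * (np.tanh(W @ y + Wp @ z + V @ u + b) - gamma * y - eps * z)
        y = y + dt * z
        outputs.append(mlp(y, *mlp_params))
    return np.array(outputs)
```

The damping terms (`gamma`, `eps`) keep the hidden trajectory bounded over long sequences, which is the informal reason such oscillators learn long-term causal mappings stably.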
📝 Abstract
Neural oscillators, originating from second-order ordinary differential equations (ODEs), have demonstrated competitive performance in stably learning causal mappings between long sequences or continuous temporal functions. However, theoretically quantifying the capacity of their neural network architectures remains a significant challenge. In this study, we consider the neural oscillator consisting of a second-order ODE followed by a multilayer perceptron (MLP). We derive its upper approximation bound for approximating causal, uniformly continuous operators between continuous temporal function spaces, as well as the bound for approximating uniformly asymptotically incrementally stable second-order dynamical systems. The proof method established for the causal continuous-operator bound can also be applied directly to state-space models consisting of a linear continuous-time complex recurrent neural network followed by an MLP. The theoretical results reveal that the approximation error of the neural oscillator for second-order dynamical systems scales polynomially with the reciprocals of the widths of the two MLPs, thus mitigating the curse of parametric complexity. The decay rates of the two established approximation error bounds are validated through two numerical cases. These results provide a robust theoretical foundation for the effective application of neural oscillators in science and engineering.
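The state-space models to which the proof method extends can be sketched as a linear continuous-time complex recurrence followed by an MLP. The diagonal parameterization, the per-mode exponential integrator, and all names below are illustrative assumptions chosen for brevity, not the paper's construction.

```python
import numpy as np

def linear_ssm(u_seq, dt, lam, B, mlp_params):
    """Illustrative linear continuous-time complex recurrent model:
        x'(t) = diag(lam) x(t) + B u(t),  output = MLP([Re x, Im x]).
    The real parts of lam are assumed negative so the recurrence is stable.
    """
    x = np.zeros(lam.shape[0], dtype=complex)
    a = np.exp(lam * dt)  # exact scalar integrator for each diagonal mode
    outs = []
    W1, b1, W2, b2 = mlp_params
    for u in u_seq:
        x = a * x + dt * (B @ u)                  # one time step of the linear ODE
        feat = np.concatenate([x.real, x.imag])   # real features for the MLP
        outs.append(W2 @ np.tanh(W1 @ feat + b1) + b2)
    return np.array(outs)
```

Because all nonlinearity sits in the MLP readout, the same width-dependent error analysis carries over from the oscillator setting, which is the structural reason the abstract's proof technique transfers.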