🤖 AI Summary
This work addresses the challenge of simultaneously achieving strong inductive biases, training efficiency, and scalability in time-series modeling. We propose an efficient continuous-time reservoir framework that integrates random features with controlled differential equations (CDEs), mapping input paths to implicit continuous representations while requiring only the linear readout layer to be trained. We introduce two architectures, the Random Fourier CDE and the Random Rough DE, and theoretically establish that their infinite-width limits converge to the RBF-lifted signature kernel and the rough signature kernel, respectively. This result unifies random reservoir computing, continuous-depth models, and path signature theory for the first time. Leveraging random Fourier features, log-ODE discretization, and log-signature representations, our method achieves state-of-the-art or competitive performance across multiple time-series benchmarks, significantly outperforming explicit signature computation methods while preserving strong path-aware inductive biases, high training efficiency, and excellent scalability.
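To make the reservoir mechanism concrete, the following is a minimal sketch, assuming a plain Euler discretization of a randomly parameterized CDE with tanh vector fields and a ridge-regression readout; the function names, dimensions, and initialization scales are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def random_cde_features(X, hidden_dim=64, seed=0):
    """Sketch of a continuous-time reservoir: an Euler discretization of the
    randomly parameterized CDE dh_t = tanh(A h_t + b) dX_t. Nothing inside
    this function is trained; only the linear readout below is fit.

    X: one observed path of shape (T, d), with T samples and d channels.
    Returns the terminal hidden state h_T as a feature vector.
    """
    rng = np.random.default_rng(seed)
    T, d = X.shape
    # Random, frozen vector fields: one (hidden_dim x hidden_dim) matrix per channel.
    A = rng.normal(0.0, 1.0 / np.sqrt(hidden_dim), size=(d, hidden_dim, hidden_dim))
    b = rng.normal(0.0, 1.0, size=(d, hidden_dim))
    h = np.zeros(hidden_dim)
    for k in range(T - 1):
        dX = X[k + 1] - X[k]         # path increment over one step
        drive = np.tanh(A @ h + b)   # (d, hidden_dim) vector-field values
        h = h + drive.T @ dX         # Euler step: contract against the increment
    return h

# Only a linear readout is trained, e.g. ridge regression on terminal states:
#   F = np.stack([random_cde_features(x) for x in paths])   # (N, hidden_dim)
#   W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
```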
📝 Abstract
We introduce a training-efficient framework for time-series learning that combines random features with controlled differential equations (CDEs). In this approach, large randomly parameterized CDEs act as continuous-time reservoirs, mapping input paths to rich representations. Only a linear readout layer is trained, resulting in fast, scalable models with strong inductive bias. Building on this foundation, we propose two variants: (i) Random Fourier CDEs (RF-CDEs), which lift the input signal with random Fourier features before the dynamics, providing a kernel-free approximation of RBF-enhanced sequence models; and (ii) Random Rough DEs (R-RDEs), which operate directly on rough-path inputs via a log-ODE discretization, using log-signatures to capture higher-order temporal interactions while remaining stable and efficient. We prove that in the infinite-width limit these models induce the RBF-lifted signature kernel and the rough signature kernel, respectively, offering a unified perspective on random-feature reservoirs, continuous-time deep architectures, and path-signature theory.
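The random Fourier feature lift in (i) can be pictured with the classical Rahimi-Recht construction, applied pointwise along the path before the dynamics. A minimal sketch, where the feature count, bandwidth parameter, and function name are illustrative assumptions:

```python
import numpy as np

def rff_lift(X, num_features=128, gamma=1.0, seed=0):
    """Pointwise random Fourier feature lift phi(x) = sqrt(2/D) * cos(W x + c).
    With W ~ N(0, gamma * I), the inner product <phi(x), phi(y)> approximates
    the RBF kernel exp(-gamma * ||x - y||^2 / 2) in expectation, so feeding
    phi(X_t) into the CDE gives a kernel-free stand-in for an RBF lift.

    X: path of shape (T, d); returns the lifted path of shape (T, num_features).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(gamma), size=(num_features, d))
    c = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W.T + c)
```

The lifted path would then drive the same kind of random reservoir sketched above, with the lift replacing the raw signal.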
We evaluate both models across a range of time-series benchmarks, demonstrating competitive or state-of-the-art performance. These methods provide a practical alternative to explicit signature computations, retaining the path-aware inductive bias of signatures while benefiting from the efficiency of random features.
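For intuition on the log-signatures consumed by the log-ODE discretization in (ii), here is a hand-rolled degree-2 example; a real implementation would use a dedicated signature library and compute these over short windows, so treat this purely as an illustrative sketch:

```python
import numpy as np

def logsig_degree2(X):
    """Degree-2 log-signature of a sampled path X of shape (T, d), approximated
    from discrete increments: the level-1 increment X_T - X_0 plus the level-2
    Levy areas (the antisymmetric iterated integrals). Higher truncation depths
    are best left to a dedicated library; this is only a sketch.
    """
    dX = np.diff(X, axis=0)        # (T-1, d) increments
    increment = dX.sum(axis=0)     # level 1: total increment X_T - X_0
    dev = X[:-1] - X[0]            # deviation from the starting point
    M = dev.T @ dX                 # discrete iterated integrals, shape (d, d)
    area = 0.5 * (M - M.T)         # antisymmetrize to get the Levy areas
    iu = np.triu_indices(X.shape[1], k=1)
    return np.concatenate([increment, area[iu]])
```

In a log-ODE scheme, features like these, computed over each subinterval, replace the raw increments as the drivers of the random vector field, capturing second-order temporal interactions in a single step.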