Linear Reservoir: A Diagonalization-Based Optimization

šŸ“… 2026-02-23
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
This work addresses the computational bottleneck of linear Echo State Networks (Linear ESNs), whose state updates incur O(N²) complexity due to dense matrix multiplication, limiting their applicability in large-scale or real-time settings. To overcome this, the authors reformulate reservoir dynamics in the eigenbasis of the recurrent weight matrix, reducing state propagation to element-wise operations and thereby eliminating costly matrix multiplications. Building on this insight, they propose three diagonalization-based optimization strategies: Eigenbasis Weight Transformation (EWT) to preserve the original dynamics, End-to-End Eigenbasis Training (EET) of readout weights, and Direct Parameter Generation (DPG) for synthesizing spectral parameters directly. This paradigm shift enables design directly in the eigenvalue domain, achieving O(N) per-step computational complexity while maintaining competitive prediction accuracy, thus offering an efficient and viable alternative to conventional Linear ESNs.

šŸ“ Abstract
We introduce a diagonalization-based optimization for Linear Echo State Networks (ESNs) that reduces the per-step computational complexity of reservoir state updates from O(N^2) to O(N). By reformulating reservoir dynamics in the eigenbasis of the recurrent matrix, the recurrent update becomes a set of independent element-wise operations, eliminating the matrix multiplication. We further propose three methods to apply our optimization depending on the situation: (i) Eigenbasis Weight Transformation (EWT), which preserves the dynamics of standard and trained Linear ESNs, (ii) End-to-End Eigenbasis Training (EET), which directly optimizes readout weights in the transformed space, and (iii) Direct Parameter Generation (DPG), which bypasses matrix diagonalization by directly sampling eigenvalues and eigenvectors, achieving performance comparable to standard Linear ESNs. Across all experiments, our methods preserve predictive accuracy while offering significant computational speedups, making them a drop-in replacement for standard Linear ESN computation and training, and suggesting a paradigm shift in Linear ESNs towards the direct selection of eigenvalues.
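The core trick described in the abstract is short enough to sketch: with W = P diag(d) P⁻¹, the transformed state z = P⁻¹ x evolves element-wise, so each step costs O(N) instead of O(N²). The following is a minimal NumPy illustration of that equivalence, not the paper's implementation; all variable names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 20

# Random recurrent matrix, scaled to spectral radius < 1 (echo state property)
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((N, 1))
u = rng.standard_normal((T, 1))

# Standard Linear ESN update: x_{t+1} = W x_t + W_in u_t  -> O(N^2) per step
x = np.zeros(N)
for t in range(T):
    x = W @ x + W_in @ u[t]

# Diagonalized update: with W = P diag(d) P^{-1}, the transformed state
# z = P^{-1} x evolves element-wise: z_{t+1} = d * z_t + (P^{-1} W_in) u_t
# -> O(N) per step (eigenvalues/eigenvectors are complex in general)
d, P = np.linalg.eig(W)
W_in_tilde = np.linalg.inv(P) @ W_in
z = np.zeros(N, dtype=complex)
for t in range(T):
    z = d * z + W_in_tilde @ u[t]

# Mapping back recovers the original trajectory up to numerical error
print("max abs error:", np.max(np.abs(P @ z - x)))
```

The diagonalization itself is an O(N³) one-off cost, amortized over all subsequent O(N) steps; a linear readout can then be trained on z (or on P z) exactly as with a standard reservoir state.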
Problem

Research questions and friction points this paper is trying to address.

Linear Echo State Networks
computational complexity
reservoir computing
state updates
efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear Echo State Network
Diagonalization
Eigenbasis Optimization
Computational Complexity Reduction
Reservoir Computing
Romain de Coudenhove
ENS PSL, Inria Center of Bordeaux University, LaBRI, IMN
Yannis Bendi-Ouis
Inria Center of Bordeaux University, LaBRI, IMN
Anthony Strock
Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine
Xavier Hinaut
Inria, Bordeaux, France
Reservoir Computing
Recurrent Neural Networks
Language Processing
Birdsong
Sensorimotor model