🤖 AI Summary
Structured state space models (SSMs) lack dynamic parameter adaptation during inference, which limits their effectiveness on non-stationary time-series data such as real-time carbon emissions. Method: We propose an online fine-tuning framework based on real-time recurrent learning (RTRL), the first to integrate RTRL into SSMs, enabling lightweight, continuous gradient updates during inference without retraining or model reloading. A linear recurrent unit is used to build an efficient state-space model tailored to small-scale streaming data from embedded vehicular hardware. Contribution/Results: Experiments show that our method substantially reduces long-horizon prediction error under resource constraints, achieving an average 18.7% reduction in MAE. It exhibits strong online adaptability and deployment feasibility on edge devices, establishing a low-overhead paradigm for continual learning in dynamic environments.
📝 Abstract
This paper introduces a new approach for fine-tuning the predictions of structured state space models (SSMs) at inference time using real-time recurrent learning. While SSMs are known for their efficiency and long-range modeling capabilities, they are typically trained offline and remain static during deployment. Our method enables online adaptation by continuously updating model parameters in response to incoming data. We evaluate the approach on linear-recurrent-unit SSMs using a small carbon emission dataset collected from embedded automotive hardware. Experimental results show that our method consistently reduces prediction error during inference, demonstrating its potential for dynamic, resource-constrained environments.
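To make the core idea concrete, the sketch below applies RTRL-style per-step gradient updates to a minimal real-valued linear recurrent unit: at each inference step the model predicts, observes the incoming target, propagates exact sensitivities of the state with respect to its parameters, and takes one SGD step. This is an illustrative assumption, not the paper's exact method; the published LRU uses a complex diagonal recurrence with stability reparameterizations, and all names and hyperparameters here are invented for the example.

```python
import numpy as np

class OnlineLRU:
    """Minimal real-valued linear recurrent unit fine-tuned online with RTRL.

    A sketch of per-step adaptation during inference, assuming a diagonal
    recurrence h_t = lam * h_{t-1} + B x_t and scalar readout y_t = C . h_t.
    """

    def __init__(self, n_in, n_state, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.lam = np.full(n_state, 0.9)           # diagonal recurrence
        self.B = rng.normal(0.0, 0.1, (n_state, n_in))
        self.C = rng.normal(0.0, 0.1, n_state)     # scalar readout
        self.h = np.zeros(n_state)
        # RTRL sensitivities of the state w.r.t. lam and B
        self.s_lam = np.zeros(n_state)             # dh_i / dlam_i
        self.s_B = np.zeros((n_state, n_in))       # dh_i / dB_ij
        self.lr = lr

    def step(self, x, target):
        """One inference step followed by one online gradient update."""
        h_prev = self.h
        # forward pass
        self.h = self.lam * h_prev + self.B @ x
        y = self.C @ self.h
        # propagate sensitivities (exact for a diagonal linear recurrence)
        self.s_lam = self.lam * self.s_lam + h_prev
        self.s_B = self.lam[:, None] * self.s_B + x[None, :]
        # squared-error loss on the current sample
        err = y - target
        dL_dh = err * self.C
        # SGD step using the RTRL gradients
        self.C -= self.lr * err * self.h
        self.lam -= self.lr * dL_dh * self.s_lam
        self.B -= self.lr * dL_dh[:, None] * self.s_B
        self.lam = np.clip(self.lam, -0.999, 0.999)  # keep recurrence stable
        return y, 0.5 * err ** 2
```

Because the recurrence is diagonal, each sensitivity trace is updated elementwise, so the per-step cost stays linear in the parameter count; that O(n) overhead is what makes continuous updates plausible on embedded hardware, consistent with the low-overhead claim above.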