🤖 AI Summary
This work addresses the modeling distortions and inefficiencies in irregular multivariate time series (IMTS) forecasting that arise from neglecting their Sparsity-Event Duality (SED). To this end, the authors propose a Spiking Transformer architecture explicitly designed to align with the SED characteristics of IMTS data. The approach integrates spiking neural networks with the Transformer framework, introducing Event-Aligned Leaky Integrate-and-Fire (EA-LIF) neurons, an event-preserving downsampling module, and a membrane potential–driven linear attention mechanism to enable efficient, event-synchronous modeling. Evaluated on multiple public IMTS benchmarks, the proposed model achieves state-of-the-art prediction accuracy while substantially reducing energy consumption and memory footprint.
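The membrane potential–driven linear attention mentioned above is a contribution of the paper and its exact formulation is not given here. As background intuition only, linear attention with binary spike queries and keys drops the softmax so the product factorises as Q(KᵀV) with a running normaliser, giving cost linear in sequence length. A minimal NumPy sketch under that generic assumption (not the paper's actual mechanism):

```python
import numpy as np

def spike_linear_attention(q_spikes, k_spikes, v):
    """Generic linear attention with binary spike queries/keys.

    Illustrative assumption: softmax is dropped, so attention
    factorises as Q @ (K^T @ V) with normaliser Q @ (K^T @ 1),
    costing O(N * d * d_v) instead of O(N^2 * d).
    """
    kv = k_spikes.T @ v                  # (d, d_v) key-value summary
    z = k_spikes.sum(axis=0)             # (d,) normaliser terms
    num = q_spikes @ kv                  # (N, d_v) unnormalised output
    den = q_spikes @ z                   # (N,) per-query normaliser
    den = np.where(den == 0, 1.0, den)   # guard against silent queries
    return num / den[:, None]
```

Because Q and K are binary spikes, the matrix products reduce to additions, which is the source of the energy savings spiking architectures typically claim.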
📝 Abstract
Telemetry streams from large-scale Internet-connected systems (e.g., IoT deployments and online platforms) naturally form an irregular multivariate time series (IMTS) whose accurate forecasting is operationally vital. A closer examination reveals a defining Sparsity-Event Duality (SED) property of IMTS: long stretches with sparse or no observations are punctuated by short, dense bursts where most semantic events (observations) occur. However, existing Graph- and Transformer-based forecasters ignore SED: pre-alignment to uniform grids with heavy padding violates sparsity by inflating sequences and forcing computation at non-informative steps, while relational recasting weakens event semantics by disrupting local temporal continuity. These limitations motivate a more faithful and natural modeling paradigm for IMTS that aligns with its SED property. We find that Spiking Neural Networks meet this requirement, as they communicate via sparse binary spikes and update in an event-driven manner, aligning naturally with the SED nature of IMTS. Therefore, we present SEDformer, an SED-enhanced Spiking Transformer for telemetry IMTS forecasting that couples: (1) a SED-based Spike Encoder that converts raw observations into event-synchronous spikes using an Event-Aligned LIF (EA-LIF) neuron; (2) an Event-Preserving Temporal Downsampling module that compresses long gaps while retaining salient firings; and (3) a stack of SED-based Spike Transformer blocks that enables intra-series dependency modeling with a membrane-based linear attention driven by EA-LIF spiking features. Experiments on public telemetry IMTS datasets show that SEDformer attains state-of-the-art forecasting accuracy while reducing energy and memory usage, providing a natural and efficient path for modeling IMTS.
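The EA-LIF neuron itself is a contribution of the paper and its precise dynamics are not reproduced here. As a rough intuition, a standard LIF neuron can be made "event-aligned" by applying the membrane leak across the irregular gap between consecutive observations and integrating input only when an observation arrives. The sketch below follows that generic reading; the exponential decay, hard reset, and function name are illustrative assumptions, not the paper's actual EA-LIF:

```python
import numpy as np

def event_aligned_lif(obs_times, obs_values, tau=5.0, threshold=1.0):
    """Minimal LIF neuron updated only at observation events.

    Between events the membrane potential decays by exp(-dt / tau);
    each observation injects its value as input current; a binary
    spike fires on threshold crossing, followed by a hard reset.
    """
    v = 0.0
    last_t = obs_times[0]
    spikes = []
    for t, x in zip(obs_times, obs_values):
        v *= np.exp(-(t - last_t) / tau)  # leak over the irregular gap
        v += x                            # integrate the observation
        s = 1 if v >= threshold else 0    # fire on threshold crossing
        if s:
            v = 0.0                       # hard reset after a spike
        spikes.append(s)
        last_t = t
    return spikes
```

Note how computation happens only at observation timestamps: long empty stretches cost nothing, which is the event-driven property the abstract argues matches the SED structure of IMTS.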