🤖 AI Summary
Traditional Alternator models suffer from weak noise modeling and limited flexibility in their dynamic representations of time-series data. To address this, we propose Alternator++, which integrates the noise-modeling mechanism of diffusion models into the Alternator framework, explicitly modeling the stochastic noise that drives both the latent and observed trajectories. Methodologically, we design a dual noise-matching loss that jointly optimizes trajectory generation and noise reconstruction, and combine it with the original Alternator reconstruction loss for end-to-end training. Experiments show that Alternator++ achieves significant improvements over strong baselines, including Mamba, ScoreGrad, and Dyffusion, across three core tasks: density estimation, time-series imputation, and forecasting. These results validate both the effectiveness and the generality of explicit noise modeling for temporal representation learning.
📝 Abstract
Alternators have recently been introduced as a framework for modeling time-dependent data. They often outperform other popular frameworks, such as state-space models and diffusion models, on challenging time-series tasks. This paper introduces a new Alternator model, called Alternator++, which enhances the flexibility of traditional Alternators by explicitly modeling the noise terms used to sample the latent and observed trajectories, drawing on the idea of noise models from the diffusion modeling literature. Alternator++ optimizes the sum of the Alternator loss and a noise-matching loss. The latter forces the noise trajectories generated by the two noise models to approximate the noise trajectories that produced the observed and latent trajectories. We demonstrate the effectiveness of Alternator++ on tasks such as density estimation, time-series imputation, and forecasting, showing that it outperforms several strong baselines, including Mamba, ScoreGrad, and Dyffusion.
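The training objective described above can be sketched in a few lines: the total loss is the sum of the Alternator reconstruction loss (over the observed and latent trajectories) and a noise-matching loss (over the two noise models' outputs). This is a minimal illustrative sketch, not the paper's implementation; all function names, array shapes, and the choice of mean-squared error and a weighting factor `lam` are assumptions.

```python
import numpy as np

def mse(a, b):
    # Mean-squared error between two trajectories of the same shape.
    return float(np.mean((a - b) ** 2))

def alternator_pp_loss(x_true, x_pred, z_true, z_pred,
                       eps_x_true, eps_x_pred, eps_z_true, eps_z_pred,
                       lam=1.0):
    """Hypothetical sketch of the Alternator++ objective.

    Sums the Alternator loss (reconstruction of observed trajectories x
    and latent trajectories z) with a noise-matching loss that pushes the
    two noise models' outputs (eps_x_pred, eps_z_pred) toward the noise
    that produced each trajectory (eps_x_true, eps_z_true). The weight
    `lam` on the noise-matching term is an illustrative assumption.
    """
    alternator_loss = mse(x_true, x_pred) + mse(z_true, z_pred)
    noise_matching = mse(eps_x_true, eps_x_pred) + mse(eps_z_true, eps_z_pred)
    return alternator_loss + lam * noise_matching
```

In this sketch, perfect predictions of both trajectories and both noise sequences drive the loss to zero, and any mismatch in either term increases it, which is the end-to-end behavior the combined objective is meant to induce.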