🤖 AI Summary
This work addresses the challenge of generating high-quality, diverse time series—both regularly and irregularly sampled. We propose a conditional score-based generative framework tailored for temporal modeling. Methodologically, we design a novel conditional denoising score-matching loss and a flexible score network that jointly encodes timestamps, sequence length, and class labels, enabling unified modeling of both regular and irregular sampling patterns; generation is performed end-to-end via diffusion. Our key contribution is the first adaptation of conditional score learning to non-uniform time series, explicitly modeling the heterogeneity of observation times. Extensive experiments on benchmark datasets—including PhysioNet and Electricity—demonstrate state-of-the-art performance in FID, TS-Divergence, and diversity metrics, significantly outperforming existing GAN-, VAE-, and diffusion-based baselines.
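The summary above describes a score network conditioned jointly on timestamps, sequence length, and class label, which lets one model cover both regular and irregular sampling. The paper does not specify the encoding, but a minimal sketch of one plausible conditioning layout (absolute time, inter-observation gap, length, one-hot label; all names and the feature layout are hypothetical) could look like this:

```python
import numpy as np

def make_condition(timestamps, seq_len, label, num_classes):
    """Build a per-observation conditioning matrix (hypothetical layout).

    timestamps : (T,) observation times, possibly irregularly spaced
    seq_len    : scalar sequence length
    label      : integer class label in [0, num_classes)
    Returns an array of shape (T, 3 + num_classes) whose columns are
    [t, delta_t, seq_len, one_hot(label)].
    """
    t = np.asarray(timestamps, dtype=float)
    # Gaps between consecutive observations expose irregular sampling
    # to the score network; the first gap is defined as zero.
    delta = np.diff(t, prepend=t[0])
    one_hot = np.eye(num_classes)[label]
    feats = np.stack([t, delta, np.full_like(t, float(seq_len))], axis=1)
    return np.concatenate([feats, np.tile(one_hot, (len(t), 1))], axis=1)
```

For a regularly sampled series the `delta_t` column is constant, so the same interface degenerates gracefully to the uniform case, which matches the "minimal changes" claim in the abstract.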
📝 Abstract
Score-based generative models (SGMs) have demonstrated unparalleled sampling quality and diversity in numerous fields, such as image generation, voice synthesis, and tabular data synthesis. Inspired by these outstanding results, we apply SGMs to time-series synthesis by learning the conditional score function of time series. To this end, we present a conditional score network for time-series synthesis and derive a denoising score matching loss tailored to our purposes, namely a conditional denoising score matching loss for time-series synthesis. In addition, our framework is flexible enough that both regular and irregular time series can be synthesized with minimal changes to our model design. Finally, we achieve exceptional synthesis performance on various time-series datasets, attaining state-of-the-art sampling diversity and quality.
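The abstract's central object is a conditional denoising score matching loss. In generic (unconditional-style) form, denoising score matching perturbs a sample x with Gaussian noise of scale σ and regresses the network onto the score of the perturbation kernel, (x − x̃)/σ². A minimal NumPy sketch of this objective (not the paper's exact loss; `score_fn`, the σ²-weighting, and the `cond` argument are illustrative assumptions):

```python
import numpy as np

def dsm_loss(score_fn, x, cond, sigma, rng):
    """Denoising score-matching loss with an extra conditioning input (sketch).

    Perturb x with N(0, sigma^2 I) noise; the score of the perturbation
    kernel N(x_tilde | x, sigma^2 I) evaluated at x_tilde is
    (x - x_tilde) / sigma^2, which serves as the regression target.
    """
    noise = rng.standard_normal(x.shape)
    x_tilde = x + sigma * noise
    target = (x - x_tilde) / sigma**2          # equals -noise / sigma
    pred = score_fn(x_tilde, cond, sigma)      # conditional score estimate
    # sigma^2 weighting keeps the loss scale comparable across noise levels.
    return np.mean(np.sum((pred - target) ** 2, axis=-1) * sigma**2)

# Usage: for x ~ N(0, I) and sigma = 1, the perturbed marginal is N(0, 2I),
# whose true score is -x_tilde / 2; it should beat a zero score function.
rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 4))
oracle = lambda xt, c, s: -xt / (1.0 + s**2)
zero = lambda xt, c, s: np.zeros_like(xt)
```

Minimizing this expectation over noise levels (and over the conditioning information the paper injects, such as timestamps and labels) trains the network to approximate the conditional score used for diffusion-based sampling.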