AI Summary
This work proposes an equivariant latent-space modeling approach that improves both the quality and efficiency of time-series generation. To address the tendency of existing generative models to break structural consistency under transformations such as translation and amplitude scaling, the method introduces equivariant regularization in the low-dimensional latent space of a pretrained autoencoder. A tailored equivariance loss explicitly injects geometric inductive biases into a flow-matching-based generative framework. Evaluated on multiple real-world time-series datasets, the approach substantially outperforms diffusion-based baselines on standard generation metrics while sampling several orders of magnitude faster, effectively balancing high-fidelity synthesis with computational efficiency.
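The generative backbone referenced here is flow matching. As a point of reference, the standard conditional flow-matching objective with linear interpolation paths can be sketched as follows; this is a generic formulation with a placeholder velocity model, not the paper's actual architecture or training code:

```python
import numpy as np

def cfm_loss(velocity_model, x1, rng):
    """Conditional flow-matching loss on a batch of latent samples x1 (B, D).

    Draws noise endpoints x0 and random times t, builds points on the
    straight path between x0 and x1, and regresses the model's predicted
    velocity onto the constant path velocity x1 - x0.
    """
    B, D = x1.shape
    x0 = rng.standard_normal((B, D))   # noise endpoints
    t = rng.uniform(size=(B, 1))       # random times in [0, 1]
    xt = (1.0 - t) * x0 + t * x1       # point on the linear path
    target = x1 - x0                   # velocity of the linear path
    pred = velocity_model(xt, t)
    return np.mean((pred - target) ** 2)

rng = np.random.default_rng(0)
# trivial stand-in model predicting zero velocity everywhere
zero_model = lambda xt, t: np.zeros_like(xt)
x1 = rng.standard_normal((8, 16))
loss = cfm_loss(zero_model, x1, rng)
print(loss)  # strictly positive: zero velocity never matches x1 - x0
```

At sampling time, integrating the learned velocity field from t = 0 to t = 1 requires only a handful of ODE steps, which is the source of the speed advantage over iterative diffusion samplers.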
Abstract
Flow-based models have proven successful for time-series generation, particularly when defined in lower-dimensional latent spaces that enable efficient sampling. However, how to design latent representations with desirable equivariance properties for time-series generative modelling remains underexplored. In this work, we propose a latent flow-matching framework in which equivariance is explicitly encouraged through a simple regularisation of a pre-trained autoencoder. Specifically, we introduce an equivariance loss that enforces consistency between transformed signals and their reconstructions, and use it to fine-tune latent spaces with respect to basic time-series transformations such as translation and amplitude scaling. We show that these equivariance-regularised latent spaces improve generation quality while preserving the computational advantages of latent flow models. Experiments on multiple real-world datasets demonstrate that our approach consistently outperforms existing diffusion-based baselines in standard time-series generation metrics, while achieving orders-of-magnitude faster sampling. These results highlight the practical benefits of incorporating geometric inductive biases into latent generative models for time series.
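The equivariance loss described above can be illustrated with a minimal numerical sketch: it penalises the mismatch between encoding a transformed signal and applying the corresponding latent-space transformation to the encoding. The encoder, transformation names, and loss formulation below are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def amplitude_scale(x, s):
    # amplitude scaling acts the same way on signals and latents
    return s * x

def equivariance_loss(encode, x, transform, latent_transform):
    """Mean-squared mismatch between 'transform then encode' and
    'encode then transform' (hypothetical regulariser)."""
    z_of_t = encode(transform(x))          # encode the transformed signal
    t_of_z = latent_transform(encode(x))   # transform the encoding
    return np.mean((z_of_t - t_of_z) ** 2)

def encode(x):
    # toy stand-in encoder: strided moving average; linear, hence
    # exactly equivariant to amplitude scaling
    w = np.ones(4) / 4.0
    return np.convolve(x, w, mode="valid")[::2]

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

loss_scale = equivariance_loss(
    encode, x,
    transform=lambda v: amplitude_scale(v, 2.0),
    latent_transform=lambda z: amplitude_scale(z, 2.0),
)
print(loss_scale)  # ~0: this encoder commutes with amplitude scaling
```

In the paper's setting the encoder is a pretrained autoencoder rather than a fixed filter, and this loss would be added to the reconstruction objective during fine-tuning so that translations and amplitude scalings of the input correspond to simple, predictable operations on the latent code.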