L-GTA: Latent Generative Modeling for Time Series Augmentation

📅 2025-07-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of simultaneously ensuring fidelity, controllability, and diversity in time-series data augmentation, this paper proposes a Transformer-enhanced Variational Recurrent Neural Network (T-VRNN). The model embeds interpretable geometric transformations—such as jittering and amplitude warping—into the latent space, enabling fine-grained, compositional generation. It preserves both statistical properties and dynamic structures of original sequences while supporting flexible control—from simple perturbations to complex pattern reconstruction. Extensive experiments on multiple real-world datasets demonstrate that the augmented data significantly improves forecasting accuracy (+3.2% on average), classification F1-score (+2.8%), and anomaly detection AUC (+4.1%). Moreover, the generated samples achieve superior performance over conventional hand-crafted transformations and state-of-the-art generative models under dynamic time warping (DTW) and mean squared error (MSE) metrics, validating the method’s effectiveness and generalizability.

Technology Category

Application Category

📝 Abstract
Data augmentation is gaining importance across various aspects of time series analysis, from forecasting to classification and anomaly detection tasks. We introduce the Latent Generative Transformer Augmentation (L-GTA) model, a generative approach using a transformer-based variational recurrent autoencoder. This model uses controlled transformations within the latent space of the model to generate new time series that preserve the intrinsic properties of the original dataset. L-GTA enables the application of diverse transformations, ranging from simple jittering to magnitude warping, and combining these basic transformations to generate more complex synthetic time series datasets. Our evaluation of several real-world datasets demonstrates the ability of L-GTA to produce more reliable, consistent, and controllable augmented data. This translates into significant improvements in predictive accuracy and similarity measures compared to direct transformation methods.
Problem

Research questions and friction points this paper is trying to address.

Generates synthetic time series preserving original data properties
Applies diverse transformations in latent space for augmentation
Improves predictive accuracy and similarity in time series analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based variational recurrent autoencoder
Controlled latent space transformations
Combines simple to complex synthetic augmentations
L
Luis Roque
LIACC/Faculty of Engineering, University of Porto, Porto, Portugal
C
Carlos Soares
LIACC/Faculty of Engineering, University of Porto, Porto, Portugal; Fraunhofer AICOS Portugal, Porto, Portugal
Vitor Cerqueira
Vitor Cerqueira
University of Porto, Faculty of Engineering
Machine learningTime series
Luís Torgo
Luís Torgo
Canada Research Chair, Professor, Faculty of Computer Science, Dalhousie University
Data ScienceData MiningMachine Learning