Time Series Generation Under Data Scarcity: A Unified Generative Modeling Approach

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the critical challenge of few-shot time-series generation, this paper proposes the first unified diffusion-based generative framework tailored for cross-domain few-shot scenarios. Methodologically, it adopts a large-scale heterogeneous time-series pretraining paradigm and innovatively introduces dynamic convolutional layers coupled with dataset-token conditioning to enable robust domain adaptation. The framework achieves high-fidelity time-series synthesis using only a minimal number of target-domain samples (e.g., 1–5). Extensive experiments demonstrate that it significantly outperforms domain-specific baselines across diverse few-shot settings. Remarkably, it also attains state-of-the-art performance on full-data benchmarks, validating its exceptional generalization capability and scalability. This work establishes a novel paradigm for low-resource time-series modeling, bridging the gap between pretraining efficacy and practical few-shot adaptability in temporal data generation.

📝 Abstract
Generative modeling of time series is a central challenge in time series analysis, particularly under data-scarce conditions. Despite recent advances in generative modeling, a comprehensive understanding of how state-of-the-art generative models perform under limited supervision remains lacking. In this work, we conduct the first large-scale study evaluating leading generative models in data-scarce settings, revealing a substantial performance gap between full-data and data-scarce regimes. To close this gap, we propose a unified diffusion-based generative framework that can synthesize high-fidelity time series across diverse domains using just a few examples. Our model is pre-trained on a large, heterogeneous collection of time series datasets, enabling it to learn generalizable temporal representations. It further incorporates architectural innovations such as dynamic convolutional layers for flexible channel adaptation and dataset token conditioning for domain-aware generation. Without requiring abundant supervision, our unified model achieves state-of-the-art performance in few-shot settings, outperforming domain-specific baselines across a wide range of subset sizes. Remarkably, it also surpasses all baselines even when tested on full-dataset benchmarks, highlighting the strength of pre-training and cross-domain generalization. We hope this work encourages the community to revisit few-shot generative modeling as a key problem in time series research and to pursue unified solutions that scale efficiently across domains. Code is available at https://github.com/azencot-group/ImagenFew.
Problem

Research questions and friction points this paper is trying to address.

Evaluating generative models in data-scarce time series settings
Proposing a unified diffusion framework for few-shot generation
Enhancing cross-domain generalization with pre-trained temporal representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified diffusion-based generative framework
Dynamic convolutional layers for flexible channel adaptation
Dataset-token conditioning for domain-aware generation
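The two architectural ideas above can be sketched together: a convolution whose kernel is generated on the fly from a learned per-dataset embedding (the "dataset token"), so a single layer can adapt its filters to each domain. This is a minimal, hypothetical illustration, not the authors' implementation; the names `DynamicConv1d` and `dataset_id` are assumptions for this sketch, and the real model conditions a full diffusion backbone rather than a single layer.

```python
# Hypothetical sketch (not the paper's code): a 1-D convolution whose weights
# are produced by a small hypernetwork conditioned on a dataset-token embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    """Conv1d whose kernel and bias are generated from a per-dataset token,
    letting one shared layer specialize its filters per domain."""
    def __init__(self, in_ch, out_ch, kernel_size, num_datasets, token_dim=32):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, kernel_size
        self.token = nn.Embedding(num_datasets, token_dim)
        # Hypernetwork: dataset token -> flattened conv kernel + bias
        self.to_weight = nn.Linear(token_dim, out_ch * in_ch * kernel_size)
        self.to_bias = nn.Linear(token_dim, out_ch)

    def forward(self, x, dataset_id):
        # x: (batch, in_ch, length); dataset_id: scalar LongTensor
        t = self.token(dataset_id)                        # (token_dim,)
        w = self.to_weight(t).view(self.out_ch, self.in_ch, self.k)
        b = self.to_bias(t)
        return F.conv1d(x, w, b, padding=self.k // 2)     # length-preserving

torch.manual_seed(0)
layer = DynamicConv1d(in_ch=3, out_ch=8, kernel_size=5, num_datasets=10)
x = torch.randn(4, 3, 64)                 # 4 series, 3 channels, 64 time steps
y = layer(x, torch.tensor(2))             # condition on dataset #2
print(y.shape)                            # torch.Size([4, 8, 64])
```

In a diffusion model the same dataset token would also be injected into the denoising network's conditioning pathway (alongside the timestep embedding), which is what "dataset-token conditioning for domain-aware generation" refers to.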