Optimizing the Training Diet: Data Mixture Search for Robust Time Series Forecasting

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of full-dataset training, namely high redundancy and class imbalance in sensor time-series data, this paper introduces the "Training Diet" paradigm, which formulates data selection as an optimization problem over cluster-level mixture proportions. Methodologically, it first performs unsupervised behavioral clustering with a large-scale time-series encoder coupled with k-means; it then searches the space of cluster proportions with Optuna-based Bayesian optimization, training and evaluating a lightweight model in each trial, to identify the optimal training subset end to end. The key contribution is treating dataset composition as a learnable variable, challenging the implicit "more data is better" assumption and enabling data-centric performance gains. On the PMSM dataset, the approach reduces MSE from 1.70 to 1.37 (a 19.41% relative improvement), significantly outperforming the full-dataset training baseline.

📝 Abstract
The standard paradigm for training deep learning models on sensor data assumes that more data is always better. However, raw sensor streams are often imbalanced and contain significant redundancy, meaning that not all data points contribute equally to model generalization. In this paper, we show that, in some cases, "less is more" when it comes to datasets. We do this by reframing the data selection problem: rather than tuning model hyperparameters, we fix the model and optimize the composition of the training data itself. We introduce a framework for discovering the optimal "training diet" from a large, unlabeled time series corpus. Our framework first uses a large-scale encoder and k-means clustering to partition the dataset into distinct, behaviorally consistent clusters. These clusters represent the fundamental 'ingredients' available for training. We then employ the Optuna optimization framework to search the high-dimensional space of possible data mixtures. For each trial, Optuna proposes a specific sampling ratio for each cluster, and a new training set is constructed based on this recipe. A smaller target model is then trained and evaluated. Our experiments reveal that this data-centric search consistently discovers data mixtures that yield models with significantly higher performance compared to baselines trained on the entire dataset. Specifically, evaluated on the PMSM dataset, our method improved performance from a baseline MSE of 1.70 to 1.37, a 19.41% improvement.
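The outer search loop the abstract describes, propose a mixture, build a training set, train, score, can be sketched in a few lines. This is a minimal stand-in, not the paper's code: the cluster qualities and the proxy objective are invented for illustration, training a real model is replaced by a closed-form score, and plain random search over the simplex stands in for Optuna's TPE sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 behavioral clusters whose usefulness for the
# target task differs (cluster 2 plays the role of redundant data).
CLUSTER_QUALITY = np.array([0.9, 0.6, 0.1, 0.7])

def proxy_mse(ratios):
    """Toy stand-in for 'train a small model on this mixture, return val MSE'.
    Lower is better; mixtures that weight informative clusters score lower."""
    ratios = np.asarray(ratios, dtype=float)
    ratios = ratios / ratios.sum()
    return 2.0 - float(ratios @ CLUSTER_QUALITY)

# Full-dataset baseline corresponds to the uniform mixture.
baseline = proxy_mse(np.ones(4))

best_mse, best_ratios = baseline, np.ones(4) / 4
for trial in range(200):  # Optuna's sampler would replace this random search
    ratios = rng.dirichlet(np.ones(4))  # a random point on the mixture simplex
    mse = proxy_mse(ratios)
    if mse < best_mse:
        best_mse, best_ratios = mse, ratios

print(f"baseline MSE {baseline:.3f} -> searched MSE {best_mse:.3f}")
```

In the real framework each evaluation is expensive (it trains the smaller target model), which is exactly why a sample-efficient Bayesian optimizer like Optuna is used instead of the random search shown here.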
Problem

Research questions and friction points this paper is trying to address.

Optimizing training data composition for time series forecasting
Searching for optimal data mixtures to improve model generalization
Using data-centric optimization to enhance forecasting model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes training data composition instead of model hyperparameters.
Clusters time series data into behavioral ingredients using encoder.
Searches optimal data mixtures with Optuna framework for performance.
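The clustering step above can be illustrated with a minimal Lloyd's k-means over embedding vectors. This is a sketch under stated assumptions: the "embeddings" are synthetic Gaussian blobs rather than outputs of the paper's large time-series encoder, the seeding is simplified for reproducibility, and in practice `sklearn.cluster.KMeans` (or similar) would be used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for encoder embeddings: three well-separated
# 8-dimensional Gaussian blobs of 50 windows each.
emb = np.concatenate([rng.normal(m, 0.3, size=(50, 8)) for m in (-2.0, 0.0, 2.0)])

def kmeans(x, k, iters=20):
    """Minimal Lloyd's k-means. Centers are seeded at evenly spaced data
    points for a deterministic sketch; k-means++ is the usual choice."""
    centers = x[:: len(x) // k][:k].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for c in range(k):
            pts = x[assign == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return assign, centers

cluster_labels, centers = kmeans(emb, k=3)
print("cluster sizes:", np.bincount(cluster_labels))
```

Each resulting cluster is one behavioral "ingredient"; the mixture search then decides how much of each ingredient goes into the training diet.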
🔎 Similar Papers
No similar papers found.