🤖 AI Summary
Addressing the challenges of industrial time-series data—namely sparsity, irregular sampling, and the indirect availability of Remaining Useful Life (RUL) labels solely via failure events—this paper proposes an interpolation-free, end-to-end RUL prediction framework. Methodologically: (1) a parametric static regression model is designed to capture implicit temporal dependencies from single-point inputs; (2) a parametric rectification (PR) mechanism, coupled with posterior-estimate-driven function approximation, enhances regression consistency under indirect supervision; (3) a novel batch-training strategy tailored for indirect supervision is introduced to mitigate overfitting and accelerate convergence. Experiments on public sparse RUL benchmarks demonstrate that our approach significantly outperforms interpolation-dependent time-series models, exhibiting superior robustness—especially under extreme sparsity—and achieving state-of-the-art prediction accuracy.
📝 Abstract
Supervised time series prediction relies on directly measured target variables, but real-world use cases such as predicting remaining useful life (RUL) involve indirect supervision, where the target variable is labeled as a function of another dependent variable. Trending temporal regression techniques rely on sequential time series inputs to capture temporal patterns, requiring interpolation when dealing with sparsely and irregularly sampled covariates along the timeline. However, interpolation can introduce significant biases, particularly with highly scarce data. In this paper, we address the RUL prediction problem with data scarcity as time series regression under indirect supervision. We introduce a unified framework called parameterized static regression, which takes single data points as inputs for regression of target values, inherently handling data scarcity without requiring interpolation. The time dependency under indirect supervision is captured via a parametrical rectification (PR) process, approximating a parametric function during inference with historical posteriori estimates, following the same underlying distribution used for labeling during training. Additionally, we propose a novel batch training technique for tasks in indirect supervision to prevent overfitting and enhance efficiency. We evaluate our model on public benchmarks for RUL prediction with simulated data scarcity. Our method demonstrates competitive performance in prediction accuracy when dealing with highly scarce time series data.