A self-supervised learning approach for denoising autoregressive models with additive noise: finite and infinite variance cases

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing estimation and denoising methods for autoregressive time series corrupted by additive heavy-tailed noise, particularly α-stable noise with infinite variance, lose efficacy under strong corruption and typically rely on prior assumptions about the noise distribution. Method: We propose a self-supervised denoising framework designed for infinite-variance noise environments that does not require full knowledge of the noise distribution. Through a reconstruction-based pretext task, the model implicitly learns the structure of the underlying clean signal. Experiments on both synthetic and semi-synthetic data cover diverse noise types, including Gaussian and α-stable, under severe impulsive interference. Contribution/Results: The method effectively recovers the clean time series and supports downstream forecasting even under extreme impulse noise, showing consistent gains over baseline methods across both finite- and infinite-variance noise settings and demonstrating strong robustness and generalization.

📝 Abstract
The autoregressive time series model is a popular second-order stationary process that models a wide range of real phenomena. In applications, however, autoregressive signals are often corrupted by additive noise. Further, both the autoregressive process and the corruptive noise may be highly impulsive, stemming from an infinite-variance distribution. Model estimation techniques that account for the additive noise tend to lose efficacy when very strong noise is present in the data, especially when that noise is heavy-tailed. Moreover, identifying a model corrupted with heavy-tailed, particularly infinite-variance, noise can be very challenging. In this paper, we propose a novel self-supervised learning method to denoise the additive noise-corrupted autoregressive model. Our approach is motivated by recent work in computer vision and does not require full knowledge of the noise distribution. We use the proposed method to recover exemplary finite- and infinite-variance autoregressive signals, namely Gaussian- and alpha-stable-distributed signals, respectively, from their noise-corrupted versions. A simulation study on both synthetic and semi-synthetic data demonstrates the efficiency of our method compared with several baseline methods, particularly when the corruption is significant and impulsive in nature. Finally, we apply the presented methodology to forecast the pure autoregressive signal from the noise-corrupted data.
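The setting described in the abstract, an autoregressive signal observed under additive finite- or infinite-variance noise, can be simulated in a few lines. The sketch below is illustrative only and is not the paper's code: the AR(1) order, the coefficient 0.7, the noise scales, and the use of the Chambers-Mallows-Stuck sampler for symmetric alpha-stable draws are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_stable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise (beta = 0).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(u)  # Cauchy case
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def ar1(phi, n, rng):
    # AR(1) recursion: x_t = phi * x_{t-1} + e_t with Gaussian innovations.
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

n = 1000
clean = ar1(0.7, n, rng)                            # pure AR(1) signal
gauss_obs = clean + 0.5 * rng.standard_normal(n)    # finite-variance corruption
stable_obs = clean + alpha_stable(1.5, n, rng)      # impulsive, infinite-variance corruption
```

The alpha-stable sample path exhibits the occasional extreme spikes that make standard (variance-based) estimation break down, which is the regime the paper targets.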
Problem

Research questions and friction points this paper is trying to address.

Denoising autoregressive models with additive noise
Handling highly impulsive noise with infinite variance
Recovering signals without full noise distribution knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning for denoising autoregressive models
Handles finite and infinite variance noise cases
No prior knowledge of noise distribution required
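The reconstruction-based idea behind these points can be sketched with a deliberately simple stand-in: fit a predictor for each sample from its neighbors while masking the sample itself, using only the noisy series, so no clean targets and no noise-distribution knowledge are needed. This is a hypothetical illustration of the blind-spot principle from image denoising, not the paper's method; the linear filter, the `fit_blind_spot_filter`/`denoise` names, the window size, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_blind_spot_filter(y, half_window=3):
    # Least-squares fit of a linear filter that predicts y[t] from its
    # neighbors, with the center sample excluded (the "blind spot").
    k = half_window
    rows, targets = [], []
    for t in range(k, len(y) - k):
        rows.append(np.concatenate([y[t - k:t], y[t + 1:t + k + 1]]))
        targets.append(y[t])
    X, z = np.asarray(rows), np.asarray(targets)
    w, *_ = np.linalg.lstsq(X, z, rcond=None)
    return w

def denoise(y, w, half_window=3):
    # Replace each interior sample by the filter's prediction from neighbors.
    k = half_window
    out = y.copy()
    for t in range(k, len(y) - k):
        out[t] = np.concatenate([y[t - k:t], y[t + 1:t + k + 1]]) @ w
    return out

# Noisy AR(1) data for the demonstration (assumed parameters).
n, phi = 5000, 0.9
e = rng.standard_normal(n)
clean = np.zeros(n)
for t in range(1, n):
    clean[t] = phi * clean[t - 1] + e[t]
noisy = clean + rng.standard_normal(n)

w = fit_blind_spot_filter(noisy)
denoised = denoise(noisy, w)
```

Because the additive noise at time t is independent of the neighboring samples, minimizing the prediction error against the noisy target also minimizes it against the unseen clean signal, which is why a clean reference is never needed. The paper replaces this toy linear filter with a learned neural model.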
Sayantan Banerjee
Department of Mathematics, IIT Madras, Chennai, 600036, Tamil Nadu, India
Agnieszka Wylomanska
Associate Professor, Faculty of Pure and Applied Mathematics, Wroclaw Tech.
stochastic processes, heavy-tailed distributions, statistical analysis, time series modeling
S. Sundar
Department of Mathematics, IIT Madras, Chennai, 600036, Tamil Nadu, India