🤖 AI Summary
Time-series anomaly detection suffers from weak cross-domain generalization and heavy reliance on supervised fine-tuning. Method: We propose FAE (Foundation Auto-Encoders), a foundation model for zero-shot time-series anomaly detection, built on a hybrid architecture combining Variational Auto-Encoders (VAEs) with dilated convolutions, pretrained end-to-end in an unsupervised manner on large-scale time-series data to capture general temporal dynamics. Contribution/Results: FAE adapts the VAE paradigm to a time-series foundation model, enabling out-of-the-box, zero-shot detection without task-specific fine-tuning. Preliminary evaluations on multi-dimensional datasets from several domains, including the KDD 2021 Anomaly Detection dataset and real operational data from a mobile ISP, indicate improved anomaly detection accuracy and cross-dataset generalization.
📝 Abstract
We investigate a novel approach to time-series modeling, inspired by the successes of large pretrained foundation models. We introduce FAE (Foundation Auto-Encoders), a foundation generative-AI model for anomaly detection in time-series data, based on Variational Auto-Encoders (VAEs). By foundation, we mean a model pretrained on massive amounts of time-series data that can learn complex temporal patterns useful for accurate modeling, forecasting, and anomaly detection on previously unseen datasets. FAE leverages VAEs and Dilated Convolutional Neural Networks (DCNNs) to build a generic model for univariate time-series modeling, with the goal of performing out-of-the-box, zero-shot anomaly detection. We introduce the main concepts of FAE and present preliminary results on multi-dimensional time-series datasets from various domains, including a real dataset from an operational mobile ISP and the well-known KDD 2021 Anomaly Detection dataset.
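To make the dilated-convolution ingredient concrete: stacking 1-D causal convolutions with dilations 1, 2, 4, … grows the receptive field exponentially with depth, which is what lets a compact encoder summarize long temporal context before the VAE bottleneck. The following is a minimal NumPy sketch of that mechanism, not the paper's implementation; the function names and the reconstruction-error anomaly score at the end are illustrative assumptions.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D causal convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ...
    (left zero-padding keeps the output the same length as the input)."""
    k = len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def receptive_field(kernel_size, num_layers):
    """Timesteps covered by num_layers stacked convs with dilations 1, 2, 4, ..."""
    return 1 + (kernel_size - 1) * (2 ** num_layers - 1)

def reconstruction_anomaly_score(x, x_hat):
    """Pointwise squared reconstruction error, the usual VAE-style anomaly score:
    large values flag timesteps the model cannot reconstruct well."""
    return (np.asarray(x) - np.asarray(x_hat)) ** 2
```

With kernel size 2, ten layers already cover `receptive_field(2, 10) == 1024` timesteps, which is why dilated stacks are a common alternative to recurrent encoders for long univariate windows.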