Causality-Aware Contrastive Learning for Robust Multivariate Time-Series Anomaly Detection

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multivariate time-series anomaly detection (MTSAD) methods suffer from inadequate causal modeling and limited robustness to noise and non-stationarity. Method: This paper introduces the first causality-aware contrastive learning paradigm for MTSAD, integrating causal inference into contrastive learning. It discovers causal structures using the PC algorithm and a Granger-causality variant; generates positive samples via causality-preserving augmentations and negative samples via causal perturbations; and achieves causally consistent separation of normal and anomalous patterns in the latent space. It further proposes a similarity-filtered one-class contrastive loss and a causal-structure-guided anomaly discrimination mechanism. Results: Evaluated on five real-world and two synthetic datasets, the method achieves an average 6.2% AUC improvement over state-of-the-art baselines and demonstrates significantly enhanced robustness to noise and non-stationary dynamics.
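The causal-discovery step described above (the PC algorithm plus a Granger-causality variant) can be illustrated with a minimal bivariate Granger test. This is a sketch, not the authors' implementation: the function name, the lag order, and the residual-variance-reduction threshold are illustrative assumptions.

```python
import numpy as np

def granger_edge(x_cause, x_effect, lag=2, min_reduction=0.05):
    """Illustrative bivariate Granger test: declare an edge x_cause -> x_effect
    if lagged values of x_cause reduce the residual sum of squares of an
    autoregressive model for x_effect by more than `min_reduction`."""
    T = len(x_effect)
    y = x_effect[lag:]
    # Lagged design matrices, each column aligned with y.
    own = np.column_stack([x_effect[lag - k:T - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x_cause[lag - k:T - k] for k in range(1, lag + 1)])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)

    rss_restricted = rss(own)                  # x_effect's own past only
    rss_full = rss(np.hstack([own, cross]))    # plus x_cause's past
    return (rss_restricted - rss_full) / rss_restricted > min_reduction
```

In a pipeline like CAROTS, the directed edges recovered this way would determine which inter-variable dependencies the augmentors preserve (for positives) or perturb (for negatives).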

📝 Abstract
Utilizing the complex inter-variable causal relationships within multivariate time-series provides a promising avenue toward more robust and reliable multivariate time-series anomaly detection (MTSAD) but remains an underexplored area of research. This paper proposes Causality-Aware contrastive learning for RObust multivariate Time-Series (CAROTS), a novel MTSAD pipeline that incorporates the notion of causality into contrastive learning. CAROTS employs two data augmentors to obtain causality-preserving and -disturbing samples that serve as a wide range of normal variations and synthetic anomalies, respectively. With causality-preserving and -disturbing samples as positives and negatives, CAROTS performs contrastive learning to train an encoder whose latent space separates normal and abnormal samples based on causality. Moreover, CAROTS introduces a similarity-filtered one-class contrastive loss that encourages the contrastive learning process to gradually incorporate more semantically diverse samples with common causal relationships. Extensive experiments on five real-world and two synthetic datasets validate that the integration of causal relationships endows CAROTS with improved MTSAD capabilities. The code is available at https://github.com/kimanki/CAROTS.
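The similarity-filtered one-class contrastive loss from the abstract can be sketched as follows. This is a hedged illustration under assumed design choices (cosine similarity, a fixed threshold, an InfoNCE-style objective), not the released CAROTS code: positives below the similarity threshold are excluded from the numerator, and lowering the threshold over training would gradually admit more semantically diverse positives, as the abstract describes.

```python
import numpy as np

def _cos(a, b):
    """Cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def filtered_one_class_loss(anchor, positives, negatives,
                            sim_threshold=0.5, temperature=0.1):
    """Sketch of a similarity-filtered one-class contrastive loss:
    only positives sufficiently similar to the anchor (i.e. still sharing
    its causal structure) contribute to the pulling term."""
    sim_pos = _cos(anchor[None, :], positives)[0]    # shape (P,)
    keep = sim_pos >= sim_threshold
    if not keep.any():
        keep = sim_pos == sim_pos.max()              # always keep the closest positive
    sim_neg = _cos(anchor[None, :], negatives)[0]    # shape (N,)
    logits = np.concatenate([sim_pos[keep], sim_neg]) / temperature
    log_denom = np.log(np.exp(logits).sum())
    # Mean InfoNCE term over the retained positives.
    return float(np.mean(log_denom - sim_pos[keep] / temperature))
```

A causality-disturbing sample mislabeled as a positive would fall below the threshold and be filtered out, leaving the loss unchanged.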
Problem

Research questions and friction points this paper is trying to address.

How to incorporate causality into contrastive learning for anomaly detection
How to construct causality-preserving and -disturbing samples for robust training
How to leverage inter-variable causal relationships to improve anomaly detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses causality-aware contrastive learning
Employs causality-preserving and -disturbing samples
Introduces similarity-filtered one-class contrastive loss
Hyungi Kim
Department of Electrical and Computer Engineering, Seoul National University, Seoul, Republic of Korea
J. Mok
Department of Electrical and Computer Engineering, Seoul National University, Seoul, Republic of Korea
Dongjun Lee
Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, Republic of Korea
Jaihyun Lew
Seoul National University
Sungjae Kim
Hyundai Motor Company, Gyeonggi-do, Republic of Korea
Sungroh Yoon
Professor, Electrical and Computer Engineering & Artificial Intelligence, Seoul National University
AI, deep learning, machine learning, on-device AI, bioinformatics