Can Multimodal LLMs Perform Time Series Anomaly Detection?

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work has not systematically investigated the applicability of multimodal large language models (MLLMs) to time series anomaly detection (TSAD). Method: We introduce VisualTimeAnomaly, the first visual-centric benchmark for TSAD, covering univariate, multivariate, and irregular time series scenarios with point-, range-, and variate-wise anomaly annotations and a missing-data robustness protocol. Time series are converted into images and fed to eight MLLMs: the proprietary GPT-4o and Gemini-1.5 and the open-source LLaVA-NeXT and Qwen2-VL, each in a larger and a smaller variant. Contribution/Results: MLLMs detect range- and variate-wise anomalies far more reliably than point-wise anomalies; they remain robust to irregular time series even with 25% of the data missing; and open-source models perform comparably to proprietary ones overall, excelling on univariate time series while proprietary models lead on multivariate ones. This work establishes a visual paradigm for MLLM-based TSAD and provides a reproducible, task-specific benchmark.

📝 Abstract
Large language models (LLMs) have been increasingly used in time series analysis. However, the potential of multimodal LLMs (MLLMs), particularly vision-language models, for time series remains largely under-explored. One natural way for humans to detect time series anomalies is through visualization and textual description. Motivated by this, we raise a critical and practical research question: Can multimodal LLMs perform time series anomaly detection? To answer this, we propose the VisualTimeAnomaly benchmark to evaluate MLLMs in time series anomaly detection (TSAD). Our approach transforms numerical time series data into images and feeds them into various MLLMs, including proprietary models (GPT-4o and Gemini-1.5) and open-source models (LLaVA-NeXT and Qwen2-VL), each with one larger and one smaller variant. In total, VisualTimeAnomaly contains 12.4k time series images spanning 3 scenarios and 3 anomaly granularities with 9 anomaly types across 8 MLLMs. Starting with the univariate case (point- and range-wise anomalies), we extend our evaluation to more practical scenarios, including multivariate and irregular time series scenarios, and variate-wise anomalies. Our study reveals several key insights: 1) MLLMs detect range- and variate-wise anomalies more effectively than point-wise anomalies. 2) MLLMs are highly robust to irregular time series, even with 25% of the data missing. 3) Open-source MLLMs perform comparably to proprietary models in TSAD. While open-source MLLMs excel on univariate time series, proprietary MLLMs demonstrate superior effectiveness on multivariate time series. To the best of our knowledge, this is the first work to comprehensively investigate MLLMs for TSAD, particularly for multivariate and irregular time series scenarios. We release our dataset and code at https://github.com/mllm-ts/VisualTimeAnomaly to support future research.
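The core mechanic the abstract describes, rendering a numerical series as a line plot and passing the image to a vision-language model, can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the plotting parameters and the request payload shape (modeled on a typical vision-chat API) are assumptions.

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

def series_to_png_b64(values, width=6, height=2, dpi=100):
    """Render a univariate series as a line plot; return base64-encoded PNG bytes."""
    fig, ax = plt.subplots(figsize=(width, height), dpi=dpi)
    ax.plot(np.arange(len(values)), values, linewidth=1)
    ax.set_xlabel("time step")
    ax.set_ylabel("value")
    fig.tight_layout()
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    return base64.b64encode(buf.getvalue()).decode("ascii")

# Example: a sine wave with a single-point spike anomaly at t = 150
t = np.arange(300)
series = np.sin(2 * np.pi * t / 50)
series[150] += 3.0
b64_png = series_to_png_b64(series)

# Hypothetical vision-chat message embedding the plot as a data URL;
# the exact field names depend on the MLLM's API.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Identify anomalous time steps in this plot."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{b64_png}"}},
    ],
}
```

The model's textual answer would then be parsed back into predicted anomaly indices for scoring against the ground truth.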
Problem

Research questions and friction points this paper is trying to address.

Evaluate multimodal LLMs for time series anomaly detection
Transform time series data into image format for analysis
Assess MLLMs on multivariate and irregular time series scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Convert time series into images as MLLM input
First systematic evaluation of MLLMs on TSAD, including multivariate and irregular scenarios
Head-to-head comparison of proprietary and open-source MLLMs
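The three anomaly granularities the benchmark evaluates (point-, range-, and variate-wise) can be illustrated by injecting each type into a synthetic multivariate series. This is a hedged sketch of the general idea, with assumed signal shapes and magnitudes, not the benchmark's generation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_base(n_steps=300, n_vars=3):
    """Smooth multivariate base signal: sinusoids with mild noise."""
    t = np.arange(n_steps)
    base = np.stack(
        [np.sin(2 * np.pi * t / (40 + 10 * k)) for k in range(n_vars)], axis=1
    )
    return base + 0.05 * rng.standard_normal((n_steps, n_vars))

def inject_point(x, var=0, idx=150, scale=3.0):
    """Point-wise anomaly: a single-step spike in one variate."""
    y = x.copy()
    y[idx, var] += scale
    return y

def inject_range(x, var=0, start=100, end=140, shift=1.5):
    """Range-wise anomaly: a sustained level shift over a contiguous window."""
    y = x.copy()
    y[start:end, var] += shift
    return y

def inject_variate(x, var=2):
    """Variate-wise anomaly: one whole variate replaced by unrelated noise."""
    y = x.copy()
    y[:, var] = rng.standard_normal(len(y))
    return y

x = make_base()
x_point = inject_point(x)    # hardest for MLLMs, per the paper's findings
x_range = inject_range(x)    # detected more reliably
x_var = inject_variate(x)    # detected more reliably
```

The paper's finding that range- and variate-wise anomalies are easier for MLLMs is intuitive from this construction: they produce large visible regions in the rendered plot, while a point spike occupies only a few pixels.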