ICAD-LLM: One-for-All Anomaly Detection via In-Context Learning with Large Language Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing anomaly detection methods are predominantly unimodal, struggling to jointly model heterogeneous data—such as time series, system logs, and tabular records—and exhibiting poor cross-domain generalization. To address this, we propose ICAD-LLM: the first unified framework that integrates large language models (LLMs) with in-context learning (ICL) for anomaly detection. ICAD-LLM employs multimodal encoding and constructs reference contexts from normal samples, enabling a single model to perform zero-shot or few-shot cross-domain adaptation across diverse data modalities. Its core contribution is the ICAD paradigm—a task-agnostic design that eliminates the need for domain-specific architectures, thereby substantially reducing deployment overhead. Extensive experiments demonstrate that ICAD-LLM matches state-of-the-art specialized methods on multiple benchmarks and achieves strong generalization on unseen domains, validating the feasibility of a single model for multimodal, multi-scenario anomaly detection.

📝 Abstract
Anomaly detection (AD) is a fundamental task of critical importance across numerous domains. Current systems increasingly operate in rapidly evolving environments that generate diverse yet interconnected data modalities -- such as time series, system logs, and tabular records -- as exemplified by modern IT systems. Effective AD methods in such environments must therefore possess two critical capabilities: (1) the ability to handle heterogeneous data formats within a unified framework, allowing the model to process and detect multiple modalities in a consistent manner during anomalous events; (2) a strong generalization ability to quickly adapt to new scenarios without extensive retraining. However, most existing methods fall short of these requirements, as they typically focus on single modalities and lack the flexibility to generalize across domains. To address this gap, we introduce a novel paradigm: In-Context Anomaly Detection (ICAD), where anomalies are defined by their dissimilarity to a relevant reference set of normal samples. Under this paradigm, we propose ICAD-LLM, a unified AD framework leveraging Large Language Models' in-context learning abilities to process heterogeneous data within a single model. Extensive experiments demonstrate that ICAD-LLM achieves competitive performance with task-specific AD methods and exhibits strong generalization to previously unseen tasks, which substantially reduces deployment costs and enables rapid adaptation to new environments. To the best of our knowledge, ICAD-LLM is the first model capable of handling anomaly detection tasks across diverse domains and modalities.
Problem

Research questions and friction points this paper is trying to address.

Unified anomaly detection across diverse data modalities like time series and logs
Generalizing to new scenarios without extensive retraining for rapid adaptation
Overcoming limitations of single-modality methods lacking cross-domain flexibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified anomaly detection framework using in-context learning
Leverages large language models for heterogeneous data processing
Enables strong generalization to unseen tasks without retraining
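The core ICAD idea — scoring a sample by its dissimilarity to a reference context of known-normal samples — can be sketched as a simple distance-based scorer. This is a minimal illustration of the paradigm, not the paper's actual LLM pipeline: the embeddings, the k-nearest-reference scoring rule, and the threshold are all assumptions for the sake of the example, whereas ICAD-LLM builds the reference context as an in-context prompt to a large language model.

```python
import numpy as np

def icad_score(query_emb, reference_embs, k=3):
    """Anomaly score = mean distance to the k nearest normal references.

    Toy stand-in for the ICAD paradigm: a sample is anomalous when it is
    dissimilar to a context of normal samples. ICAD-LLM realizes this with
    an LLM prompt over multimodal encodings; here we use plain vectors.
    """
    dists = np.linalg.norm(reference_embs - query_emb, axis=1)
    return float(np.sort(dists)[:k].mean())

# Normal references cluster near the origin; the outlier scores far higher.
refs = np.array([[0.0, 0.1], [0.1, 0.0], [-0.1, 0.05], [0.05, -0.1]])
normal_q = np.array([0.02, 0.03])
anomaly_q = np.array([3.0, -2.5])

assert icad_score(anomaly_q, refs) > icad_score(normal_q, refs)
```

Because the score depends only on the reference set supplied at query time, adapting to a new domain amounts to swapping in that domain's normal samples — the analogue of the zero-/few-shot adaptation the paper claims, with no retraining of the scorer itself.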
Zhongyuan Wu
School of Computer Science and Engineering, Beihang University, Beijing, China
Jingyuan Wang
School of Computer Science and Engineering, Beihang University, Beijing, China
Zexuan Cheng
School of Computer Science and Engineering, Beihang University, Beijing, China
Yilong Zhou
School of Computer Science and Engineering, Beihang University, Beijing, China
Weizhi Wang
Ph.D. Candidate, University of California Santa Barbara
Juhua Pu
School of Computer Science and Engineering, Beihang University, Beijing, China
Chao Li
School of Computer Science and Engineering, Beihang University, Beijing, China
Changqing Ma
Capinfo Co., Ltd.