🤖 AI Summary
Existing anomaly detection methods are predominantly unimodal, struggling to jointly model heterogeneous data—such as time series, system logs, and tabular records—and exhibiting poor cross-domain generalization. To address this, we propose ICAD-LLM: the first unified framework that integrates large language models (LLMs) with in-context learning (ICL) for anomaly detection. ICAD-LLM employs multimodal encoding and constructs reference contexts from normal samples, enabling a single model to perform zero-shot or few-shot cross-domain adaptation across diverse data modalities. Its core contribution is the ICAD paradigm—a task-agnostic design that eliminates the need for domain-specific architectures, thereby substantially reducing deployment overhead. Extensive experiments demonstrate that ICAD-LLM matches state-of-the-art specialized methods on multiple benchmarks and achieves strong generalization on unseen domains, validating the feasibility of a single model for multimodal, multi-scenario anomaly detection.
📝 Abstract
Anomaly detection (AD) is a task of critical importance across numerous domains. Current systems increasingly operate in rapidly evolving environments that generate diverse yet interconnected data modalities -- such as time series, system logs, and tabular records -- as exemplified by modern IT systems. Effective AD methods in such environments must therefore possess two critical capabilities: (1) the ability to handle heterogeneous data formats within a unified framework, so that a single model can process multiple modalities and detect anomalous events in a consistent manner; and (2) strong generalization, allowing quick adaptation to new scenarios without extensive retraining. However, most existing methods fall short of these requirements: they typically focus on a single modality and lack the flexibility to generalize across domains. To address this gap, we introduce a novel paradigm, In-Context Anomaly Detection (ICAD), in which anomalies are defined by their dissimilarity to a relevant reference set of normal samples. Under this paradigm, we propose ICAD-LLM, a unified AD framework that leverages Large Language Models' in-context learning abilities to process heterogeneous data within a single model. Extensive experiments demonstrate that ICAD-LLM achieves performance competitive with task-specific AD methods and generalizes strongly to previously unseen tasks, substantially reducing deployment costs and enabling rapid adaptation to new environments. To the best of our knowledge, ICAD-LLM is the first model capable of handling anomaly detection tasks across diverse domains and modalities.
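The core idea of the ICAD paradigm -- scoring a sample by its dissimilarity to a reference context of normal samples -- can be sketched in a few lines. This is a minimal illustration using k-nearest-neighbor distance over numeric vectors, not the paper's LLM-based implementation; the function and variable names are hypothetical:

```python
import numpy as np

def icad_score(query: np.ndarray, references: np.ndarray, k: int = 3) -> float:
    """Illustrative ICAD-style score: mean distance from `query` to its
    k nearest samples in `references` (the reference set of normal samples).
    Higher score means more dissimilar to normal behavior, i.e. more anomalous.
    """
    dists = np.linalg.norm(references - query, axis=1)
    return float(np.sort(dists)[:k].mean())

# Reference context: normal samples clustered near the origin
refs = np.array([[0.0, 0.1], [0.1, -0.1], [-0.1, 0.0], [0.05, 0.05]])

score_normal = icad_score(np.array([0.0, 0.0]), refs)
score_anomalous = icad_score(np.array([5.0, 5.0]), refs)
assert score_normal < score_anomalous
```

Because the anomaly score is defined relative to whatever reference set is supplied at inference time, the same scoring function adapts to a new domain by swapping in that domain's normal samples, with no retraining -- the property the abstract attributes to ICAD-LLM's in-context design.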