Facilitate Collaboration between Large Language Model and Task-specific Model for Time Series Anomaly Detection

πŸ“… 2025-01-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address representation domain mismatch and error accumulation in collaborative anomaly detection between large language models (LLMs) and lightweight time-series models, this paper proposes CoLLaTeβ€”a neuro-inspired framework that enables synergistic reasoning between an LLM (encoding domain knowledge) and a dedicated time-series model (learning normal patterns). CoLLaTe introduces a domain alignment module and a novel collaborative loss function, theoretically and empirically mitigating heterogeneity-induced representation inconsistency and predictive bias propagation. The framework integrates an LLM, a time-series neural network, a learnable alignment mapping, and a joint optimization training paradigm. Evaluated on multiple benchmark datasets, CoLLaTe achieves an average 12.3% improvement in F1-score over standalone LLM- or lightweight-model-based approaches, while enhancing robustness and interpretability.

Technology Category

Application Category

πŸ“ Abstract
In anomaly detection, methods based on large language models (LLMs) can incorporate expert knowledge, while task-specific smaller models excel at extracting normal patterns and detecting value fluctuations. Inspired by the human nervous system, where the brain stores expert knowledge and the peripheral nervous system and spinal cord handle specific tasks like withdrawal and knee-jerk reflexes, we propose CoLLaTe, a framework designed to facilitate collaboration between LLMs and task-specific models, leveraging the strengths of both. In this work, we first formulate the collaboration process and identify two key challenges in the collaboration between LLMs and task-specific models: (1) the misalignment between the expression domains of LLMs and smaller models, and (2) error accumulation arising from the predictions of both models. To address these challenges, we introduce two key components in CoLLaTe: the alignment module and the collaborative loss function. Through theoretical analysis and experimental validation, we demonstrate that these components effectively mitigate the identified challenges and achieve better performance than LLM based methods and task-specific smaller model.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Time Series Anomaly Detection
Intermodel Discrepancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Learning
Anomaly Detection
Large Language Models
πŸ”Ž Similar Papers
No similar papers found.