🤖 AI Summary
Anomaly detection and out-of-distribution (OOD) sample identification with large language models (LLMs) currently lack a unifying framework, limiting generalization and interpretability. Method: This survey proposes a new taxonomy centered on the functional role the LLM plays in detection — roughly, a generative *assistant* versus a decision-oriented *discriminator* — departing from conventional statistical or supervised paradigms. Under this taxonomy it systematically reviews over one hundred LLM-adapted detection methods, spanning techniques such as prompt engineering, self-supervised representation analysis, uncertainty modeling, and multimodal alignment. Contribution/Results: The work distills fundamental challenges and promising research directions, and maintains a dynamically updated literature repository — providing both conceptual grounding and practical guidance for trustworthy LLM deployment.
📝 Abstract
Detecting anomalies or out-of-distribution (OOD) samples is critical for maintaining the reliability and trustworthiness of machine learning systems. Recently, Large Language Models (LLMs) have demonstrated their effectiveness not only in natural language processing but also in broader applications, owing to their advanced comprehension and generative capabilities. The integration of LLMs into anomaly and OOD detection marks a significant shift from the traditional paradigms in the field. This survey focuses on the problem of anomaly and OOD detection in the context of LLMs. We propose a new taxonomy that categorizes existing approaches into two classes based on the role played by the LLM. Following this taxonomy, we review the related work under each category, and conclude with potential challenges and directions for future research in this field. We also provide an up-to-date reading list of relevant papers.