LLM meets ML: Data-efficient Anomaly Detection on Unseen Unstable Logs

📅 2024-06-11
📈 Citations: 7
✨ Influential: 1
📄 PDF
🤖 AI Summary
To address the instability of log data caused by software and environment changes, and the resulting degradation of detection performance when labeled data is scarce, this paper proposes FlexLog, a lightweight hybrid approach to anomaly detection on unstable logs (ULAD). Methodologically, FlexLog combines traditional machine learning models (a decision tree, k-nearest neighbors, and a feedforward neural network) with the Mistral large language model through ensemble learning. It further incorporates a caching mechanism and retrieval-augmented generation (RAG) to enhance efficiency and effectiveness. Evaluated on four ULAD datasets (ADFA-U, LOGEVOL-U, SynHDFS-U, and SYNEVOL-U), FlexLog outperforms all baselines by at least 1.2 percentage points in F1 score while using 62.87 percentage points less labeled data, and gains up to 13 percentage points in F1 score on ADFA-U when trained on the same amount of data. Moreover, it keeps inference latency under one second per log sequence, making it practical for most deployments outside latency-sensitive systems.
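The ensemble-plus-cache idea described above can be illustrated with a minimal sketch. Everything here is hypothetical: the class and function names are stand-ins, and the toy base models replace the paper's actual decision tree, KNN, feedforward network, and Mistral components.

```python
from collections import Counter

class EnsembleDetector:
    """Hypothetical FlexLog-style detector: majority vote over base
    models, with a cache that memoizes verdicts for previously seen
    log sequences so repeated sequences skip model inference."""

    def __init__(self, models):
        # each model is a callable: log sequence -> "anomaly" | "normal"
        self.models = models
        self.cache = {}

    def predict(self, log_seq):
        key = tuple(log_seq)
        if key in self.cache:          # cache hit: no model calls needed
            return self.cache[key]
        votes = Counter(m(log_seq) for m in self.models)
        verdict = votes.most_common(1)[0][0]   # simple majority vote
        self.cache[key] = verdict
        return verdict

# Toy stand-in models (not the paper's): flag sequences with an ERROR event
flag_error = lambda seq: "anomaly" if "ERROR" in seq else "normal"
always_normal = lambda seq: "normal"

detector = EnsembleDetector([flag_error, flag_error, always_normal])
print(detector.predict(["INIT", "ERROR", "SHUTDOWN"]))  # prints "anomaly"
```

In the paper's setting, the cache and RAG components serve to cut LLM inference cost and improve generalization; this sketch only shows the voting and caching skeleton.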

πŸ“ Abstract
Most log-based anomaly detectors assume logs are stable, though logs are often unstable due to software or environmental changes. Anomaly detection on unstable logs (ULAD) is therefore a more realistic, yet under-investigated challenge. Current approaches predominantly employ machine learning (ML) models, which often require extensive labeled data for training. To mitigate data insufficiency, we propose FlexLog, a novel hybrid approach for ULAD that combines ML models -- decision tree, k-nearest neighbors, and a feedforward neural network -- with a Large Language Model (Mistral) through ensemble learning. FlexLog also incorporates a cache and retrieval-augmented generation (RAG) to further enhance efficiency and effectiveness. To evaluate FlexLog, we configured four datasets for ULAD, namely ADFA-U, LOGEVOL-U, SynHDFS-U, and SYNEVOL-U. FlexLog outperforms all baselines by at least 1.2 percentage points in F1 score while using 62.87 percentage points less labeled data. When trained on the same amount of data as the baselines, FlexLog achieves up to a 13 percentage points increase in F1 score on ADFA-U across varying training dataset sizes. Additionally, FlexLog maintains inference time under one second per log sequence, making it suitable for most applications except latency-sensitive systems. Further analysis reveals the positive impact of FlexLog's key components: cache, RAG and ensemble learning.
Problem

Research questions and friction points this paper is trying to address.

Detecting anomalies in unstable logs with limited labeled data
Combining ML models and LLMs for efficient anomaly detection
Improving accuracy and reducing data dependency in log analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid ML and LLM for unstable log anomaly detection
Ensemble learning with cache and RAG enhancement
Reduces labeled data need by 62.87 percentage points
Fateme Hadadi
University of Ottawa, Ottawa, Canada
Qinghua Xu
Lero Research Centre
Cyber-physical Systems · Testing · Large Language Model · Digital Twin
D. Bianculli
University of Luxembourg, Luxembourg, Luxembourg
Lionel C. Briand
University of Ottawa, Ottawa, Canada