🤖 AI Summary
Existing public log anomaly detection datasets suffer from three major limitations: incomplete event coverage, artificial patterns introduced by static analysis, and weak semantic modeling. To address these, this paper proposes the first automated semantic log generation framework tailored for anomaly detection, capable of iteratively synthesizing log sequences with precise anomaly annotations without requiring actual system execution. The framework integrates enhanced program analysis with large language model (LLM)-driven Chain-of-Thought reasoning, enabling end-to-end semantic controllability across log synthesis, semantic modeling, and anomaly injection. Evaluated on Hadoop and HDFS, it improves event coverage by 38–95× over prior datasets, and its generated logs raise the average F1-score of three state-of-the-art detection models by 1.8% (up to 3.7%). The work also establishes a high-fidelity, scalable log benchmark resource.
📝 Abstract
The scarcity of high-quality public log datasets has become a critical bottleneck in advancing log-based anomaly detection techniques. Current datasets exhibit three fundamental limitations: (1) incomplete event coverage, (2) artificial patterns introduced by static analysis-based generation frameworks, and (3) insufficient semantic awareness. To address these challenges, we present AnomalyGen, the first automated log synthesis framework specifically designed for anomaly detection. Our framework introduces a novel four-phase architecture that integrates enhanced program analysis with Chain-of-Thought (CoT) reasoning, enabling iterative log generation and anomaly annotation without requiring physical system execution. Evaluations on the Hadoop and HDFS distributed systems demonstrate that AnomalyGen achieves substantially broader log event coverage (a 38–95× improvement over existing datasets) while producing more operationally realistic log sequences than static analysis-based approaches. When augmenting benchmark datasets with the synthesized logs, we observe F1-score improvements of up to 3.7% (1.8% on average across three state-of-the-art anomaly detection models). This work not only establishes a high-quality benchmarking resource for automated log analysis but also pioneers a new paradigm for applying large language models (LLMs) in software engineering workflows.