Log Parsing using LLMs with Self-Generated In-Context Learning and Self-Correction

📅 2024-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional log parsers generalize poorly and infer inaccurate templates because log formats evolve dynamically and labeled historical data are scarce. Method: This paper proposes AdaParser, an adaptive LLM-based log parsing framework combining self-generated in-context learning (SG-ICL) with self-correction. It retrieves demonstrations from a dynamically maintained candidate template set, uses an LLM-based template corrector to fix potential parsing errors in generated templates, and updates the candidate set with newly produced templates, enabling adaptive parsing even in zero-shot settings. It requires no fine-tuning or human annotation, relying solely on large language models (LLMs) and prompt engineering for accurate template inference. Contribution/Results: AdaParser outperforms state-of-the-art methods across all metrics on multiple large-scale public benchmarks, even under zero-shot settings, and consistently improves the performance of diverse LLM backbones by a large margin.
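To make the task concrete: log parsing maps a raw log message to a template in which variable parts are masked (commonly written as `<*>`). Below is a minimal illustrative sketch, not the paper's implementation, of matching a message against a candidate template set; the placeholder convention and helper names are assumptions for illustration.

```python
import re

def template_to_regex(template):
    """Turn a template with <*> placeholders into a matching regex."""
    parts = [re.escape(p) for p in template.split("<*>")]
    return "^" + r"(.+?)".join(parts) + "$"

def match_template(message, candidates):
    """Return the first candidate template that covers the message, else None."""
    for tmpl in candidates:
        if re.match(template_to_regex(tmpl), message):
            return tmpl
    return None

candidates = ["Connection from <*> closed", "User <*> logged in"]
print(match_template("User alice logged in", candidates))
# -> "User <*> logged in"
```

A parser that keeps such a candidate set around can serve repeated log events without re-querying a model, which is the efficiency motivation behind maintaining previously generated templates.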

📝 Abstract
Log parsing transforms log messages into structured formats, serving as a crucial step for log analysis. Despite a variety of log parsers that have been proposed, their performance on evolving log data remains unsatisfactory due to reliance on human-crafted rules or learning-based models with limited training data. The recent emergence of large language models (LLMs) has demonstrated strong abilities in understanding natural language and code, making it promising to apply LLMs for log parsing. Consequently, several studies have proposed LLM-based log parsers. However, LLMs may produce inaccurate templates, and existing LLM-based log parsers directly use the template generated by the LLM as the parsing result, hindering the accuracy of log parsing. Furthermore, these log parsers depend heavily on historical log data as demonstrations, which poses challenges in maintaining accuracy when dealing with scarce historical log data or evolving log data. To address these challenges, we propose AdaParser, an effective and adaptive log parsing framework using LLMs with self-generated in-context learning (SG-ICL) and self-correction. To facilitate accurate log parsing, AdaParser incorporates a novel component, a template corrector, which utilizes the LLM to correct potential parsing errors in the templates it generates. In addition, AdaParser maintains a dynamic candidate set composed of previously generated templates as demonstrations to adapt evolving log data. Extensive experiments on public large-scale datasets indicate that AdaParser outperforms state-of-the-art methods across all metrics, even in zero-shot scenarios. Moreover, when integrated with different LLMs, AdaParser consistently enhances the performance of the utilized LLMs by a large margin.
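The abstract's pipeline (match against a candidate set, otherwise generate a template with the LLM, self-correct it, then add it to the set as a future demonstration) can be sketched as follows. This is a hypothetical sketch under loud assumptions: `fake_llm_template` stands in for the actual LLM prompt with retrieved demonstrations, and `corrected` is a trivial stand-in for the paper's template corrector.

```python
import re

def fake_llm_template(message, demos):
    """Stand-in for the LLM call: masks digit runs as variables.
    (Assumption: the real system prompts an LLM using demos as in-context examples.)"""
    return re.sub(r"\b\d+\b", "<*>", message)

def matches(template, message):
    parts = [re.escape(p) for p in template.split("<*>")]
    return re.fullmatch("(.+?)".join(parts), message) is not None

def corrected(template, message):
    """Stand-in self-correction: if the template fails to cover the
    message, fall back to masking every token."""
    return template if matches(template, message) else re.sub(r"\S+", "<*>", message)

candidate_set = []  # dynamic pool of previously generated templates

def parse(message):
    for tmpl in candidate_set:        # 1. reuse a prior template if one matches
        if matches(tmpl, message):
            return tmpl
    tmpl = fake_llm_template(message, candidate_set)  # 2. SG-ICL generation step
    tmpl = corrected(tmpl, message)                   # 3. self-correction step
    candidate_set.append(tmpl)                        # 4. update demonstrations
    return tmpl

print(parse("Opened session 1001"))  # -> "Opened session <*>"
print(parse("Opened session 2002"))  # reuses the cached template, no LLM call
```

The second call is served from the candidate set, which is how the framework adapts to evolving log data without historical labels: every corrected template becomes a demonstration for future messages.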
Problem

Research questions and friction points this paper is trying to address.

Log Parsing
Evolving Log Formats
Scarce Historical Log Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdaParser
Self-Correction
Dynamic Template Library
Yifan Wu
Peking University, China
Siyu Yu
Ph.D. student at Peking University
AIOps · Log analysis
Ying Li
Peking University, China