LLM-SrcLog: Towards Proactive and Unified Log Template Extraction via Large Language Models

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing log parsing methods are predominantly passive and log-centric, neglecting source code—thus hindering adaptability to system evolution—and incur prohibitive deployment costs when applying LLMs for per-log inference. This paper proposes the first source-code-aware, proactive log template extraction framework. It employs static code analysis—specifically cross-function variable identification and log-statement localization—to predict templates in a white-box manner prior to deployment. To ensure generalizability and accuracy, it synergistically integrates black-box clustering for logs lacking corresponding source code. Additionally, it leverages LLMs to generate templates, enhanced by rule-based post-processing. Evaluated on multiple benchmarks and industrial systems, our method achieves 2–35% higher F1 scores, reduces online parsing latency by ~1000× compared to per-log LLM inference—matching data-driven approaches—and is validated in real production environments.

📝 Abstract
Log parsing transforms raw logs into structured templates containing constants and variables. It underpins anomaly detection, failure diagnosis, and other AIOps tasks. Current parsers are mostly reactive and log-centric. They only infer templates from logs, mostly overlooking the source code. This restricts their capacity to grasp dynamic log structures or adjust to evolving systems. Moreover, per-log LLM inference is too costly for practical deployment. In this paper, we propose LLM-SrcLog, a proactive and unified framework for log template parsing. It extracts templates directly from source code prior to deployment and supplements them with data-driven parsing for logs without available code. LLM-SrcLog integrates a cross-function static code analyzer to reconstruct meaningful logging contexts, an LLM-based white-box template extractor with post-processing to distinguish constants from variables, and a black-box template extractor that incorporates data-driven clustering for remaining unmatched logs. Experiments on two public benchmarks (Hadoop and Zookeeper) and a large-scale industrial system (Sunfire-Compute) show that, compared to two LLM-based baselines, LLM-SrcLog improves average F1-score by 2-17% and 8-35%. Meanwhile, its online parsing latency is comparable to data-driven methods and about 1,000 times faster than per-log LLM parsing. LLM-SrcLog achieves a near-ideal balance between speed and accuracy. Finally, we further validate the effectiveness of LLM-SrcLog through practical case studies in a real-world production environment.
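The white-box path described in the abstract (locating log statements in source code and turning their format strings into templates before deployment) can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: it walks a Python AST for logger calls and applies a rule-based post-processing step that replaces %-style format slots with the conventional `<*>` variable placeholder.

```python
import ast

# Hypothetical sketch (not LLM-SrcLog itself): statically locate logging
# calls in Python source and derive templates from their format strings,
# marking variable slots with the conventional <*> placeholder.
def extract_templates(source: str) -> list[str]:
    templates = []
    for node in ast.walk(ast.parse(source)):
        # Match calls like logger.info("..."), log.error("..."), etc.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr in {"debug", "info", "warning", "error"}
                and node.args
                and isinstance(node.args[0], ast.Constant)
                and isinstance(node.args[0].value, str)):
            msg = node.args[0].value
            # Rule-based post-processing: %-style slots become variables.
            for spec in ("%s", "%d", "%f", "%r"):
                msg = msg.replace(spec, "<*>")
            templates.append(msg)
    return templates

code = 'logger.info("Connected to %s on port %d")'
print(extract_templates(code))  # ['Connected to <*> on port <*>']
```

A real cross-function analyzer would additionally resolve format strings and variables passed through helper functions, which is where the paper's "meaningful logging contexts" come in.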
Problem

Research questions and friction points this paper is trying to address.

Existing parsers are reactive and log-centric: they infer templates only from logs, overlooking source code
Log-only inference struggles with dynamic log structures and with systems that evolve after deployment
Per-log LLM inference is too slow and costly for large-scale industrial deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts log templates proactively from source code before deployment
Integrates static code analysis with LLM-based white-box template extraction
Falls back to black-box, data-driven clustering for logs without available source code
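The black-box fallback above can be sketched as a minimal data-driven clustering pass, in the spirit of token-based parsers. This is an assumed simplification, not the paper's algorithm: logs are grouped by length and masked first token, and any column that varies within a group is marked as a variable.

```python
import re
from collections import defaultdict

# Hypothetical sketch of a black-box fallback: cluster logs that lack
# matching source code by a coarse signature, then emit one template per
# cluster by replacing columns whose tokens differ with <*>.
def mask(token: str) -> str:
    # Numeric or hex tokens are almost certainly variables.
    return "<*>" if re.fullmatch(r"\d+|0x[0-9a-f]+", token) else token

def cluster_templates(logs: list[str]) -> list[str]:
    groups = defaultdict(list)
    for line in logs:
        tokens = line.split()
        groups[(len(tokens), mask(tokens[0]))].append(tokens)
    templates = []
    for members in groups.values():
        templates.append(" ".join(
            col[0] if len(set(col)) == 1 else "<*>"
            for col in zip(*members)))
    return templates

logs = ["Received block 123 from node7",
        "Received block 456 from node9"]
print(cluster_templates(logs))  # ['Received block <*> from <*>']
```

Production parsers such as Drain refine this idea with a prefix tree and similarity thresholds; the point here is only that the black-box path needs no source code at all.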
Authors
Jiaqi Sun, Carnegie Mellon University (causality; graph representation learning)
Wei Li, Alibaba Group, 969 West Wen Yi Road, Hangzhou 311121, China
Heng Zhang, Alibaba Group, 969 West Wen Yi Road, Hangzhou 311121, China
Chutong Ding, Shanghai Jiao Tong University, School of Computer Science, 800 Dongchuan RD, Shanghai 200240, China
Shiyou Qian, Shanghai Jiao Tong University (Computer Science)
Jian Cao, Shanghai Jiao Tong University, School of Computer Science, 800 Dongchuan RD, Shanghai 200240, China
Guangtao Xue, Professor of Computer Science, Shanghai Jiao Tong University (Mobile Computing; Social Networks; Wireless Sensor Networks; Distributed Computing)