LogReasoner: Empowering LLMs with Expert-like Coarse-to-Fine Reasoning for Log Analysis Tasks

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack expert-aligned, structured reasoning capabilities for log analysis, hindering the generation of fine-grained, interpretable diagnostic steps. To address this, we propose LogReasoner, a two-stage reasoning enhancement framework that progresses from coarse-grained to fine-grained inference. First, it constructs a high-level reasoning skeleton grounded in expert-derived troubleshooting flowcharts; second, it refines low-level reasoning details via task-specific stepwise fine-tuning and preference learning. Implemented on open-source LLMs including Qwen-2.5 and Llama-3, LogReasoner supports four log analysis tasks: anomaly detection, root-cause diagnosis, failure localization, and remediation planning. Extensive evaluation demonstrates significant improvements in both accuracy and reasoning interpretability over state-of-the-art baselines. Our results validate that structured reasoning augmentation effectively enhances LLMs' log analysis capabilities while maintaining strong generalizability across diverse tasks and model architectures.

📝 Abstract
Log analysis is crucial for monitoring system health and diagnosing failures in complex systems. Recent advances in large language models (LLMs) offer new opportunities for automated log analysis, leveraging their reasoning capabilities to perform tasks such as anomaly detection and failure prediction. However, general-purpose LLMs struggle to formulate structured reasoning workflows that align with expert cognition and deliver precise details of reasoning steps. To address these challenges, we propose LogReasoner, a coarse-to-fine reasoning enhancement framework designed to enable LLMs to reason about log analysis tasks like experts. LogReasoner consists of two stages: (1) coarse-grained enhancement of expert thinking, where high-level expert thoughts are constructed from collected troubleshooting flowcharts and existing tasks to enable LLMs to formulate structured reasoning workflows; and (2) fine-grained enhancement of specific steps, where we first fine-tune the LLM with task-specific stepwise solutions to enable instantiated reasoning, then employ preference learning to calibrate the LLM's reasoning details from its mistakes, further strengthening the LLM's analytical granularity and correctness. We evaluate LogReasoner on four distinct log analysis tasks using open-source LLMs such as Qwen-2.5 and Llama-3. Experimental results show that LogReasoner significantly outperforms existing LLMs, achieving state-of-the-art performance and demonstrating its effectiveness in enhancing the reasoning capabilities of LLMs for log analysis.
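The abstract does not specify the preference-learning objective used in stage 2. As an illustrative sketch only, a common choice for calibrating a model from chosen-vs.-rejected reasoning traces is a DPO-style loss; the sketch below computes it on toy sequence log-probabilities (all values hypothetical, not from the paper):

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """DPO-style preference loss on sequence log-probabilities.
    Loss shrinks as the policy shifts probability mass toward the
    chosen (correct) reasoning trace relative to a frozen reference."""
    margin = ((policy_chosen_lp - ref_chosen_lp)
              - (policy_rejected_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log sigmoid

# Zero margin (policy == reference): no preference learned yet.
neutral = dpo_loss(-10.0, -10.0, -10.0, -10.0)
# Policy now favors the correct trace: loss drops below the baseline.
better = dpo_loss(-8.0, -12.0, -10.0, -10.0)
```

With a zero margin the loss equals log 2; any positive margin pushes it lower, which is the calibration-from-mistakes behavior the abstract describes at a high level.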
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with structured reasoning workflows for log analysis
General-purpose LLMs lack expert-like analytical granularity and correctness
Current models cannot align reasoning steps with expert cognition patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coarse-grained expert thinking enhancement for workflows
Fine-grained stepwise solutions via task-specific fine-tuning
Preference learning calibration to improve reasoning correctness
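To make the coarse-to-fine structure concrete, here is a minimal, hypothetical sketch of how a high-level expert workflow skeleton might be composed into a prompt that the fine-grained stage then expands. The workflow steps and wording are illustrative assumptions, not the paper's actual prompts or training data:

```python
# Coarse stage: a hypothetical expert-style workflow skeleton
# (illustrative only; not taken from the paper).
WORKFLOW_SKELETON = [
    "1. Parse the log into template and parameters",
    "2. Check the template against known anomaly patterns",
    "3. Localize the failing component from the parameters",
    "4. Propose a remediation step",
]

def build_reasoning_prompt(log_line: str) -> str:
    """Compose a prompt that asks the model to follow the coarse
    skeleton first, then fill in fine-grained, task-specific details."""
    skeleton = "\n".join(WORKFLOW_SKELETON)
    return (
        "Analyze the following log entry like an expert.\n"
        f"Log: {log_line}\n"
        "Follow this workflow, expanding each step with concrete details:\n"
        f"{skeleton}"
    )

prompt = build_reasoning_prompt(
    "kernel: EXT4-fs error (device sda1): htree_dirblock_to_tree")
```

The design point is the separation of concerns: the skeleton fixes the order of reasoning, while the fine-tuned model supplies the per-step content.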
👥 Authors
Lipeng Ma — Fudan University
Yixuan Li — Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China
Weidong Yang — Professor of Computer Science
Mingjie Zhou — Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China
Xinyi Liu — Wuhan University
Ben Fei — Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China
Shuhao Li — Fudan University
Xiaoyan Sun — Microsoft Research Asia
Sihang Jiang — Fudan University
Yanghua Xiao — Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China