🤖 AI Summary
This work addresses the challenges posed by the high heterogeneity and unstructured nature of logs from leadership-class high-performance computing (HPC) systems, which hinder efficient information extraction and operational pattern discovery. The authors propose a domain-specific instruction-tuning framework for large language models that integrates HPC log templates with chain-of-thought reasoning to achieve high-fidelity log parsing and pattern mining. Built upon an 8B-parameter LLaMA model and employing a hybrid fine-tuning strategy tailored to HPC logs, the approach attains parsing accuracy on the LogHub benchmark comparable to that of LLaMA-70B and Claude. Applied to 600 million production logs from the Frontier supercomputer, it uncovers critical associative patterns related to temporal dynamics, node anomalies, and workload errors. The method supports local deployment, ensuring both privacy preservation and computational efficiency.
📝 Abstract
Leadership-class HPC systems generate massive volumes of heterogeneous, largely unstructured system logs. Because these logs originate from diverse software, hardware, and runtime layers, they exhibit inconsistent formats, making structure extraction and pattern discovery extremely challenging. Robust log parsing and mining are therefore critical to transform this raw telemetry into actionable insights that reveal operational patterns, diagnose anomalies, and enable reliable, efficient, and scalable system analysis. Recent advances in large language models (LLMs) offer a promising new direction for automated log understanding in leadership-class HPC environments.
To capitalize on this opportunity, we present a domain-adapted, instruction-following, LLM-driven framework that leverages chain-of-thought (CoT) reasoning to parse and structure HPC logs with high fidelity. Our approach combines domain-specific log-template data with instruction-tuned examples to fine-tune an 8B-parameter LLaMA model tailored for HPC log analysis. We develop a hybrid fine-tuning methodology that adapts a general-purpose LLM to domain-specific log data, enabling a privacy-preserving, locally deployable, fast, and energy-efficient log-mining approach. We conduct experiments on a diverse set of log datasets from the LogHub repository. The evaluation confirms that our approach achieves parsing accuracy on par with significantly larger models, such as LLaMA 70B and Anthropic's Claude. We further validate the practical utility of our fine-tuned model by parsing over 600 million production logs from the Frontier supercomputer over a four-week window, uncovering critical patterns in temporal dynamics, node-level anomalies, and workload-error log correlations.
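To make the parsing task concrete: "log parsing" here means recovering the static template shared by many raw log lines while masking out variable fields. The minimal regex sketch below illustrates the idea only; the patterns (node IDs, hex addresses, numbers) are hypothetical examples and are not the paper's fine-tuned-LLM method, which learns such structure from instruction-tuned template data instead of hand-written rules.

```python
import re

# Illustrative masking rules, applied in order (most specific first).
# These patterns are invented for this sketch, not taken from the paper.
MASKS = [
    (re.compile(r"\bc\d+-\d+\b"), "<NODE>"),       # hypothetical node IDs, e.g. c12-3
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),  # hex addresses
    (re.compile(r"\b\d+\b"), "<NUM>"),             # remaining numeric fields
]

def to_template(line: str) -> str:
    """Replace variable tokens with placeholders to recover the static template."""
    for pattern, placeholder in MASKS:
        line = pattern.sub(placeholder, line)
    return line

logs = [
    "node c12-3 reported ECC error at 0x7f3a on GPU 2",
    "node c07-9 reported ECC error at 0x19bc on GPU 0",
]
# Both raw lines collapse to the single template
# "node <NODE> reported ECC error at <HEX> on GPU <NUM>"
templates = {to_template(line) for line in logs}
```

Rule-based parsers like this break down as formats multiply across software, hardware, and runtime layers, which is exactly the heterogeneity that motivates the LLM-based approach above.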