Leveraging LLMs for Structured Information Extraction and Analysis from Cloud Incident Reports (Work In Progress Paper)

📅 2026-03-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Cloud service incident reports are often lengthy and unstructured, impeding reliability analysis and long-term improvement efforts. This study systematically evaluates six large language models (three lightweight, three state-of-the-art) combined with six prompting strategies, such as few-shot learning, on structured information extraction from over 3,000 real-world cloud incident reports from AWS, Azure, and GCP. Results show that LLMs achieve 75%–95% accuracy on metadata extraction, depending on the dataset. Few-shot prompting improves accuracy for most fields but requires 1.5–2× more input tokens. Lightweight models such as Gemini 2.0 and GPT-3.5 offer a more favorable trade-off among accuracy, latency, and cost. The findings highlight key design considerations in prompt engineering and model selection for practical deployment, offering actionable guidance for cloud reliability engineering.

📝 Abstract
Incident management is essential to maintain the reliability and availability of cloud computing services. Cloud vendors typically disclose incident reports to the public, summarizing the failures and recovery process to help minimize their impact. However, such reports are often lengthy and unstructured, making them difficult to understand, analyze, and use for long-term dependability improvements. The emergence of LLMs offers new opportunities to address this challenge, but how to achieve this is currently understudied. In this paper, we explore the use of cutting-edge LLMs to extract key information from unstructured cloud incident reports. First, we collect more than 3,000 incident reports from 3 leading cloud service providers (AWS, Azure, and GCP), and manually annotate these collected samples. Then, we design and compare 6 prompt strategies to extract and classify different types of information. We consider 6 LLM models, including 3 lightweight and 3 state-of-the-art (SotA), and evaluate model accuracy, latency, and token cost across datasets, models, prompts, and extracted fields. Our study has uncovered the following key findings: (1) LLMs achieve high metadata extraction accuracy, 75%–95% depending on the dataset. (2) Few-shot prompting generally improves accuracy for metadata fields except for classification, and has better (lower) latency due to shorter output tokens, but requires 1.5–2× more input tokens. (3) Lightweight models (e.g., Gemini 2.0, GPT 3.5) offer favorable trade-offs in accuracy, cost, and latency; SotA models yield higher accuracy at significantly greater cost and latency. Our study provides tools, methodologies, and insights for leveraging LLMs to accurately and efficiently extract incident-report information. The FAIR data and code are publicly available at https://github.com/atlarge-research/llm-cloud-incident-extraction.
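The few-shot extraction approach described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the field schema, the worked example, and the prompt wording are all assumptions for demonstration purposes; the real artifacts are in the linked repository, and the provider-specific LLM call is omitted.

```python
import json

# Illustrative metadata fields; the paper's actual schema may differ.
FIELDS = ["service", "start_time", "duration", "root_cause_category"]

# One hand-annotated example, standing in for the few-shot demonstrations
# the paper builds from its manually annotated incident reports.
FEW_SHOT_EXAMPLES = [
    {
        "report": "On May 3, the object storage service in us-east-1 was "
                  "degraded for 45 minutes due to a misconfigured deployment.",
        "extracted": {
            "service": "object storage",
            "start_time": "May 3",
            "duration": "45 minutes",
            "root_cause_category": "configuration change",
        },
    },
]

def build_few_shot_prompt(report: str) -> str:
    """Assemble a few-shot extraction prompt: task instruction, worked
    examples, then the target report. The model is asked for JSON only,
    which keeps output tokens short (the latency effect noted in the
    abstract) at the cost of longer input prompts."""
    lines = [
        "Extract the following fields from the cloud incident report "
        "and answer with JSON only: " + ", ".join(FIELDS) + "."
    ]
    for ex in FEW_SHOT_EXAMPLES:
        lines.append("Report: " + ex["report"])
        lines.append("JSON: " + json.dumps(ex["extracted"]))
    lines.append("Report: " + report)
    lines.append("JSON:")
    return "\n".join(lines)

# The assembled prompt would be sent as a single user message to any of
# the evaluated models; the API call itself is provider-specific.
prompt = build_few_shot_prompt(
    "Between 02:10 and 03:40 UTC the managed database service saw "
    "elevated error rates caused by an overloaded metadata node."
)
print(len(prompt.splitlines()))
```

The zero-shot variant the paper compares against would simply drop the `FEW_SHOT_EXAMPLES` loop, trading the 1.5–2× input-token overhead for (per the findings) generally lower per-field accuracy.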
Problem

Research questions and friction points this paper is trying to address.

cloud incident reports
structured information extraction
unstructured data
incident management
dependability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM
incident report
information extraction
prompt engineering
cloud reliability
Xiaoyu Chu
Vrije Universiteit Amsterdam, The Netherlands
Shashikant Ilager
University of Amsterdam
Energy-efficient computing, distributed systems, cloud computing, edge computing, ML for systems
Yizhen Zang
Delft University of Technology, The Netherlands
Sacheendra Talluri
Vrije Universiteit Amsterdam, The Netherlands
Alexandru Iosup
Professor of Comp.Sci., VU University Amsterdam
Distributed Systems, Performance Engineering, Cloud Computing, Big Data, Computer Ecosystems