FAILS: A Framework for Automated Collection and Analysis of LLM Service Incidents

πŸ“… 2025-03-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Problem: LLM services (e.g., ChatGPT, DALL·E) are widely adopted yet suffer frequent failures; their failure modes remain poorly characterized, and no open-source tool supports reliability research grounded in real-world incident data. Method: We propose the first open-source failure analysis framework for LLM services, enabling automated collection, cleaning, temporal modeling, and multidimensional root-cause attribution of failure events. It uniquely integrates reliability engineering metrics (MTTR/MTBF quantification), LLM-augmented analysis (automated summarization, root-cause inference, pattern mining), and interactive visualization. Contribution/Results: The framework is publicly released and supports analysis across 17 failure dimensions. Empirical evaluation demonstrates its effectiveness in identifying cross-provider failure trends and common root causes, enhancing the observability and resilience of LLM-based systems.

πŸ“ Abstract
Large Language Model (LLM) services such as ChatGPT, DALL·E, and Cursor have quickly become essential for society, businesses, and individuals, empowering applications such as chatbots, image generation, and code assistance. The complexity of LLM systems makes them prone to failures, affecting their reliability and availability, yet their failure patterns are not fully understood, making reliability an emerging problem. Moreover, there are limited datasets and studies in this area, and in particular no open-access tool for analyzing LLM service failures based on incident reports. Addressing these problems, in this work we propose FAILS, the first open-sourced framework for incident report collection and analysis across different LLM services and providers. FAILS provides comprehensive data collection, analysis, and visualization capabilities: (1) it can automatically collect, clean, and update incident data through its data scraper and processing components; (2) it provides 17 types of failure analysis, allowing users to explore temporal trends of incidents and analyze service reliability metrics such as Mean Time to Recovery (MTTR) and Mean Time Between Failures (MTBF); (3) it leverages advanced LLM tools to assist in data analysis and interpretation, enabling users to gain observations and insights efficiently. All functions are integrated in the backend, allowing users to easily access them through a web-based frontend interface. FAILS supports researchers, engineers, and general users in understanding failure patterns and further mitigating operational incidents and outages in LLM services. The framework is publicly available at https://github.com/atlarge-research/FAILS.
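The MTTR and MTBF metrics mentioned in the abstract can be sketched from a list of incident intervals. A minimal illustration (not FAILS's actual implementation; note that MTBF definitions vary, and the end-of-incident-to-next-start convention used here is one common choice):

```python
from datetime import timedelta

def mttr_mtbf(incidents):
    """Compute (MTTR, MTBF) from incident (start, end) pairs sorted by start.

    MTTR: mean duration of an incident (time to recovery).
    MTBF: mean gap from the end of one incident to the start of the next
          (one common convention; start-to-start is also used in practice).
    """
    repairs = [end - start for start, end in incidents]
    mttr = sum(repairs, timedelta()) / len(repairs)
    gaps = [incidents[i + 1][0] - incidents[i][1]
            for i in range(len(incidents) - 1)]
    mtbf = sum(gaps, timedelta()) / len(gaps)
    return mttr, mtbf
```

For example, three outages lasting 60, 30, and 90 minutes give an MTTR of one hour, while the MTBF reflects the average quiet period between them.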
Problem

Research questions and friction points this paper is trying to address.

Understanding failure patterns in LLM services
Lack of open-access tools for incident analysis
Improving reliability and availability of LLM systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated incident data collection and cleaning
Comprehensive failure analysis across 17 analysis types
Web-based interface for easy user access
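The collection-and-cleaning step listed above can be illustrated with a small sketch. Field names (`id`, `created_at`, `resolved_at`) are illustrative of Statuspage-style incident JSON, not FAILS's actual schema; the sketch drops duplicates and malformed records and parses ISO-8601 timestamps:

```python
from datetime import datetime

def clean_incidents(raw):
    """Deduplicate and validate raw incident records (hypothetical schema).

    Keeps only records with a unique id, parseable ISO-8601 timestamps,
    and a resolution time at or after the start time.
    """
    seen, cleaned = set(), []
    for rec in raw:
        iid = rec.get("id")
        if iid is None or iid in seen:
            continue  # skip duplicates and records without an id
        try:
            start = datetime.fromisoformat(rec["created_at"])
            end = datetime.fromisoformat(rec["resolved_at"])
        except (KeyError, TypeError, ValueError):
            continue  # skip records with missing or unparseable timestamps
        if end < start:
            continue  # skip records that resolve before they start
        seen.add(iid)
        cleaned.append({"id": iid, "start": start, "end": end})
    return cleaned
```

A cleaned list of this shape can feed directly into reliability-metric computations such as MTTR/MTBF.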
πŸ”Ž Similar Papers
No similar papers found.
Sándor Battaglini-Fischer
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Nishanthi Srinivasan
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Bálint László Szarvas
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Xiaoyu Chu
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Alexandru Iosup
Professor of Comp.Sci., VU University Amsterdam
Distributed Systems · Performance Engineering · Cloud Computing · Big Data · Computer Ecosystems