Beyond Microservices: Testing Web-Scale RCA Methods on GPU-Driven LLM Workloads

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of effectively localizing faults in large language model (LLM) inference systems, where the complexity of the software-hardware stack undermines existing root cause analysis (RCA) methods. For the first time, it systematically evaluates 24 RCA approaches—20 metric-based, 2 trace-based, and 2 multi-source—under GPU-accelerated LLM workloads through controlled fault injection, comprehensive metric monitoring, distributed tracing, and multi-source data fusion. The empirical results reveal that multi-source methods achieve the highest accuracy, while the effectiveness of metric-based techniques varies significantly with fault type, and trace-based methods generally fail to identify root causes. These findings demonstrate that current RCA tools do not generalize well to LLM scenarios, prompting the authors to propose enhanced observability practices and tailored analytical guidelines specifically designed for LLM systems.

📝 Abstract
Large language model (LLM) services have become an integral part of search, assistance, and decision-making applications. However, unlike traditional web or microservice stacks, the hardware and software stack enabling LLM inference deployment is considerably more complex and far less field-tested, making it more susceptible to failures that are difficult to resolve. Keeping outage costs and quality-of-service degradation in check depends on shortening mean time to repair, which in practice is gated by how quickly the fault is identified, located, and diagnosed. Automated root cause analysis (RCA) accelerates failure localization by identifying the system component that failed and tracing how the failure propagated. Numerous RCA methods have been developed for traditional services, using request-path tracing and analysis of resource metrics and log data. Yet existing RCA methods were not designed for LLM deployments, which present distinct runtime characteristics. In this study, we evaluate the effectiveness of RCA methods on a best-practice LLM inference deployment under controlled failure injection. Across 24 methods (20 metric-based, two trace-based, and two multi-source), we find that multi-source approaches achieve the highest accuracy, metric-based methods show fault-type-dependent performance, and trace-based methods largely fail. These results reveal that existing RCA tools do not generalize to LLM systems, motivating tailored analysis techniques and enhanced observability, for which we formulate guidelines.
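To make the metric-based category concrete, here is a minimal sketch of one common approach: ranking components by how anomalous their monitored metric becomes during a fault window relative to a healthy baseline. This is an illustrative toy, not the paper's evaluated implementation; the component names, metric values, and the z-score ranking heuristic are all assumptions for the example.

```python
import statistics

def rank_root_causes(baseline, fault_window):
    """Rank components by metric anomalousness during a fault.

    baseline:     {component: [metric samples before the fault]}
    fault_window: {component: [metric samples during the fault]}
    Returns component names sorted by descending |z-score| of the
    fault-window mean against the baseline distribution.
    """
    scores = {}
    for comp, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard against zero variance
        fault_mean = statistics.mean(fault_window[comp])
        scores[comp] = abs(fault_mean - mean) / stdev
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical scenario: GPU utilization collapses on one inference worker.
baseline = {
    "gpu_worker_0": [0.91, 0.90, 0.92, 0.91],
    "gpu_worker_1": [0.89, 0.90, 0.88, 0.90],
    "router":       [0.30, 0.31, 0.29, 0.30],
}
fault = {
    "gpu_worker_0": [0.12, 0.10],  # injected fault: utilization drop
    "gpu_worker_1": [0.89, 0.90],
    "router":       [0.31, 0.30],
}
print(rank_root_causes(baseline, fault))  # gpu_worker_0 ranked first
```

A real metric-based RCA tool operates over hundreds of time series and more sophisticated scoring (causal graphs, correlation clustering), but this single-metric ranking captures the core idea whose fault-type-dependent behavior the study measures.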
Problem

Research questions and friction points this paper is trying to address.

root cause analysis
large language models
LLM inference
failure diagnosis
observability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Root Cause Analysis
Large Language Models
LLM Inference
Failure Diagnosis
GPU-Driven Workloads