OpsEval: A Comprehensive IT Operations Benchmark Suite for Large Language Models

📅 2023-10-11
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models (LLMs) in AIOps scenarios. We introduce OpsEval, the first multilingual, multitask benchmark for IT operations, covering three core tasks (fault root-cause analysis, operations script generation, and alert summarization) with 7,184 multiple-choice and 1,736 open-ended questions. Methodologically, we propose the first systematic taxonomy of Ops capabilities; enforce rigorous expert validation and strict test-set isolation to ensure reliability; and integrate hallucination detection, automated QA evaluation, and a dynamic leaderboard. Key contributions include an empirical characterization of how model scale, quantization, and training strategy affect operational competence; open-sourcing 20% of the high-quality annotated data; and a real-time, multilingual performance leaderboard. All benchmark data and the evaluation framework are publicly available.
📝 Abstract
Information Technology (IT) Operations (Ops), particularly Artificial Intelligence for IT Operations (AIOps), underpins the orderly and stable operation of existing information systems. According to Gartner's prediction, the use of AI technology for automated IT operations has become a new trend. Large language models (LLMs), which have exhibited remarkable capabilities in NLP tasks, are showing great potential in the field of AIOps, such as in root cause analysis of failures, generation of operations and maintenance scripts, and summarization of alert information. Nevertheless, the performance of current LLMs on Ops tasks remains to be determined. In this paper, we present OpsEval, a comprehensive task-oriented Ops benchmark designed for LLMs. For the first time, OpsEval assesses LLMs' proficiency in various crucial scenarios at different ability levels. The benchmark includes 7,184 multiple-choice questions and 1,736 question-answering (QA) questions in English and Chinese. By conducting a comprehensive performance evaluation of the current leading large language models, we show how various LLM techniques affect Ops performance and discuss findings on topics including model quantization, QA evaluation, and hallucination. To ensure the credibility of our evaluation, we invited dozens of domain experts to manually review our questions. We have also open-sourced 20% of the test QA to assist researchers in preliminary evaluations of their OpsLLM models; the remaining 80% of the data is withheld to prevent test-set leakage. Additionally, we have constructed an online leaderboard that is updated in real time and will continue to be maintained, ensuring that newly emerging LLMs are evaluated promptly. Both our dataset and leaderboard have been made public.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' performance in IT operations tasks
Assessing LLMs' abilities in AIOps scenarios
Providing benchmark for OpsLLM model evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive Ops benchmark for LLMs
Multi-choice and QA formats evaluation
Real-time online leaderboard updates
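The multiple-choice portion of such a benchmark is typically scored by exact-match accuracy against gold option letters. A minimal sketch of that loop is below; the question fields (`stem`, `choices`, `answer`) and the `ask_model` callable are hypothetical illustrations, not the paper's actual evaluation harness.

```python
# Minimal sketch of multiple-choice accuracy scoring, as used by benchmarks
# like OpsEval. Field names and the ask_model interface are assumptions.

def grade_multiple_choice(questions, ask_model):
    """Return accuracy of `ask_model` over a list of MC questions.

    Each question is a dict with hypothetical fields:
      "stem"    - question text
      "choices" - mapping of option letter to option text
      "answer"  - gold option letter
    """
    correct = 0
    for q in questions:
        # Render the question with lettered options, one per line.
        prompt = q["stem"] + "\n" + "\n".join(
            f"{letter}. {text}" for letter, text in sorted(q["choices"].items())
        )
        # Take the first character of the model's reply as its chosen letter.
        prediction = ask_model(prompt).strip().upper()[:1]
        if prediction == q["answer"]:
            correct += 1
    return correct / len(questions) if questions else 0.0
```

Open-ended QA items need a separate judge (automated QA evaluation in the paper's terms), since free-form answers cannot be graded by letter matching.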
Yuhe Liu
Tsinghua University
Changhua Pei
Chinese Academy of Sciences
Longlong Xu
Tsinghua University
Bohan Chen
University of Liverpool
Artificial Intelligence · Generative Models
Mingze Sun
Tsinghua University
Computer Vision · Graphics
Zhirui Zhang
Beijing University of Posts and Telecommunications
Yongqian Sun
Nankai University
AIOps · Anomaly Detection · Failure Localization · Microservices Fault Diagnosis · Root Cause Analysis
Shenglin Zhang
Nankai University
AI Operations in general
Kun Wang
Tsinghua University
Haiming Zhang
Chinese Academy of Sciences
Jianhui Li
Chinese Academy of Sciences
Gaogang Xie
Chinese Academy of Sciences
Xidao Wen
Tsinghua University
Xiaohui Nie
Associate Professor, Computer Network Information Center, CAS
AIOps · AI for Networking
Minghua Ma
Microsoft
AIOps · Cloud Intelligence
Dan Pei
Associate Professor of Computer Science, Tsinghua University
AIOps · Time Series Intelligence