JFTA-Bench: Evaluate LLM's Ability of Tracking and Analyzing Malfunctions Using Fault Trees

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models (LLMs) cannot directly process fault trees represented as images by proposing a novel textual representation method that converts fault trees into structured, LLM-parsable text for the first time. Leveraging this representation, the authors construct a high-quality benchmark dataset comprising 3,130 multi-turn dialogues with an average of 40.75 turns per conversation, incorporating a long-range error rollback and recovery mechanism to simulate real-world user misoperations during interactive troubleshooting. Experimental results demonstrate that the proposed approach effectively enables LLMs to perform fault localization and tracing in complex systems. Among evaluated models, Gemini 2.5 Pro achieves the best performance on this benchmark, validating the effectiveness of the framework in enhancing the robustness of interactive diagnostic capabilities.

📝 Abstract
In the maintenance of complex systems, fault trees are used to locate problems and provide targeted solutions. To enable fault trees stored as images to be directly processed by large language models, which can assist in tracking and analyzing malfunctions, we propose a novel textual representation of fault trees. Building on it, we construct a benchmark for multi-turn dialogue systems that emphasizes robust interaction in complex environments, evaluating a model's ability to assist in malfunction localization; it contains $3130$ entries with $40.75$ turns per entry on average. We train an end-to-end model to generate vague information that reflects user behavior, and we introduce long-range rollback and recovery procedures to simulate user error scenarios, enabling assessment of a model's integrated capabilities in task tracking and error recovery. Gemini 2.5 Pro achieves the best performance.
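The abstract's core idea of converting fault trees into structured, LLM-parsable text can be illustrated with a minimal sketch. The paper does not show its exact format; the indentation scheme, gate labels, and node names below are illustrative assumptions, not the benchmark's actual representation.

```python
# Hedged sketch: one plausible textual serialization of a fault tree.
# A node is a dict: {"event": str, "gate": "AND" | "OR" | None,
# "children": [subnodes]}; leaf nodes (basic events) carry no gate.

def serialize_fault_tree(node, depth=0):
    """Render a fault-tree node and its subtree as indented text lines."""
    indent = "  " * depth
    gate = node.get("gate")
    label = node["event"] + (f" [{gate}]" if gate else "")
    lines = [indent + label]
    for child in node.get("children", []):
        lines.extend(serialize_fault_tree(child, depth + 1))
    return lines

# Illustrative tree: a pump failure caused by power loss OR mechanical wear.
tree = {
    "event": "Pump failure",
    "gate": "OR",
    "children": [
        {"event": "Power loss"},
        {"event": "Mechanical wear"},
    ],
}
text = "\n".join(serialize_fault_tree(tree))
```

An LLM can then traverse such text top-down during a diagnostic dialogue, asking the user to confirm or rule out each child event under a gate.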
Problem

Research questions and friction points this paper is trying to address.

fault tree
large language model
malfunction analysis
multi-turn dialogue
error recovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

fault tree representation
multi-turn dialogue benchmark
LLM-based malfunction analysis
error recovery simulation
vague user behavior modeling
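The "error recovery simulation" listed above can be sketched as a rollback over a multi-turn dialogue history: when a user misoperation is detected, the erroneous turns are discarded and the dialogue resumes from the last consistent state. The state representation and function names here are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of long-range rollback in a multi-turn diagnostic dialogue.

def apply_turn(history, turn):
    """Append one dialogue turn; a turn is a (role, content) pair."""
    return history + [turn]

def rollback(history, n_turns):
    """Discard the last n_turns turns, simulating recovery after
    a user misoperation is identified."""
    return history[:-n_turns] if n_turns else history

history = []
history = apply_turn(history, ("user", "Symptom: pump not starting"))
history = apply_turn(history, ("assistant", "Please check the power supply."))
history = apply_turn(history, ("user", "I replaced the impeller"))  # misoperation
# Roll back the erroneous turn and resume from the consistent state.
history = rollback(history, 1)
```

A benchmark built this way can measure both whether the model notices the misoperation and whether it correctly resumes the diagnostic procedure after the rollback.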
👥 Authors
Yuhui Wang
Fudan University
Zhixiong Yang
Fudan University
Ming Zhang
School of Computer Science, Fudan University
Shihan Dou
Fudan University
Zhiheng Xi
Fudan University
Enyu Zhou
Fudan University
Senjie Jin
Fudan University
Yujiong Shen
Fudan University
Dingwei Zhu
Fudan University
Yi Dong
Fudan University
Tao Gui
Fudan University
Qi Zhang
Fudan University
Xuanjing Huang
Fudan University