🤖 AI Summary
This work addresses the challenge that large language models (LLMs) cannot directly process fault trees represented as images by proposing a novel textual representation method that converts fault trees into structured, LLM-parsable text for the first time. Leveraging this representation, the authors construct a high-quality benchmark dataset comprising 3,130 multi-turn dialogues with an average of 40.75 turns per conversation, incorporating a long-range error rollback and recovery mechanism to simulate real-world user misoperations during interactive troubleshooting. Experimental results demonstrate that the proposed approach effectively enables LLMs to perform fault localization and tracing in complex systems. Among evaluated models, Gemini 2.5 Pro achieves the best performance on this benchmark, validating the effectiveness of the framework in enhancing the robustness of interactive diagnostic capabilities.
📝 Abstract
In the maintenance of complex systems, fault trees are used to locate problems and provide targeted solutions. To enable fault trees stored as images to be processed directly by large language models, which can assist in tracking and analyzing malfunctions, we propose a novel textual representation of fault trees. Building on it, we construct a benchmark for multi-turn dialogue systems that emphasizes robust interaction in complex environments and evaluates a model's ability to assist in malfunction localization; it contains $3130$ entries with an average of $40.75$ turns per entry. We train an end-to-end model to generate vague information that reflects user behavior, and we introduce long-range rollback and recovery procedures to simulate user error scenarios, enabling assessment of a model's integrated capabilities in task tracking and error recovery. Among the evaluated models, Gemini 2.5 Pro achieves the best performance.
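The abstract does not specify the exact textual format used to encode fault trees. As a purely illustrative sketch (the node names, gate labels, and indented layout below are assumptions, not the paper's actual representation), one plausible way to serialize a fault tree of logic gates and basic events into structured, LLM-parsable text is:

```python
# Hypothetical example: a small fault tree as nested dictionaries.
# "gate" marks a logic gate (OR/AND); leaf nodes are basic events.
FAULT_TREE = {
    "name": "System failure",
    "gate": "OR",
    "children": [
        {"name": "Power loss", "gate": "AND", "children": [
            {"name": "Main supply fault"},
            {"name": "Backup battery dead"},
        ]},
        {"name": "Controller fault"},
    ],
}

def to_text(node, depth=0):
    """Recursively render a fault-tree node as indented, line-oriented text."""
    indent = "  " * depth
    gate = node.get("gate")
    label = f"{node['name']} [{gate}]" if gate else node["name"]
    lines = [indent + "- " + label]
    for child in node.get("children", []):
        lines.extend(to_text(child, depth + 1))
    return lines

print("\n".join(to_text(FAULT_TREE)))
```

An indentation-based encoding like this keeps the gate hierarchy explicit while remaining plain text, which is the property the paper's representation needs for an LLM to traverse the tree during interactive fault localization.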