🤖 AI Summary
Data-driven algorithms for diagnosing sudden anomalies (e.g., leaks, contamination) in water distribution networks often lack operator trust, which limits their practical adoption.
Method: This paper proposes an explainable event diagnosis framework centered on the concept of “counterfactual event fingerprints” -- a construct that quantifies and visualizes the difference between the current diagnostic output and its closest alternative explanation, thereby revealing the model's decision rationale. The framework integrates counterfactual reasoning, fault diagnosis algorithms, and graphical explanation techniques.
Contribution/Results: The framework is applied and evaluated on a realistic use case based on the L-Town benchmark. By exposing the reasoning behind algorithmic diagnoses, the counterfactual explanations are intended to improve operators' understanding of the algorithm's inner workings and their trust in automated outputs, supporting more informed human–AI collaborative decision-making during anomaly response.
📝 Abstract
The increasing penetration of information and communication technologies in the design, monitoring, and control of water systems enables the use of algorithms for detecting and identifying unanticipated events (such as leakages or water contamination) using sensor measurements. However, data-driven methodologies do not always give accurate results and are often not trusted by operators, who may prefer to use their engineering judgment and experience to deal with such events. In this work, we propose a framework for interpretable event diagnosis -- an approach that assists the operators in associating the results of algorithmic event diagnosis methodologies with their own intuition and experience. This is achieved by providing contrasting (i.e., counterfactual) explanations of the results provided by fault diagnosis algorithms; these aim to improve the operators' understanding of the algorithm's inner workings, thus enabling them to make a more informed decision by combining the results with their personal experience. Specifically, we propose counterfactual event fingerprints, a representation of the difference between the current event diagnosis and the closest alternative explanation, which can be presented graphically. The proposed methodology is applied and evaluated on a realistic use case using the L-Town benchmark.
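The core idea -- a fingerprint as the difference between the current diagnosis and the closest alternative -- can be illustrated with a toy sketch. This is not the paper's algorithm: the diagnosis function, the threshold, and the brute-force coordinate search below are all hypothetical stand-ins for a trained fault diagnosis model.

```python
import numpy as np

def counterfactual_fingerprint(classify, x, step=0.01, max_steps=500):
    """Toy counterfactual search (illustrative, not the paper's method).

    Perturbs one feature at a time until the diagnosis flips, and returns
    the smallest-magnitude perturbation found. That difference vector
    plays the role of a 'counterfactual event fingerprint'.
    """
    y0 = classify(x)
    best = None
    for i in range(len(x)):                 # try each sensor feature alone
        for sign in (+1.0, -1.0):           # try both perturbation directions
            for k in range(1, max_steps + 1):
                x_cf = x.copy()
                x_cf[i] += sign * step * k
                if classify(x_cf) != y0:    # diagnosis changed: counterfactual found
                    fp = x_cf - x
                    if best is None or np.abs(fp).sum() < np.abs(best).sum():
                        best = fp
                    break
    return best  # None if no counterfactual lies within the search budget

# Hypothetical diagnosis rule: flag a leak when the mean pressure-residual
# signal exceeds a threshold (a real system would use a learned model).
classify = lambda x: "leak" if x.mean() > 0.5 else "normal"

x = np.array([0.45, 0.48, 0.47])        # current sensor readings: "normal"
fp = counterfactual_fingerprint(classify, x)
```

Here `fp` shows which single feature would have to change, and by how much, for the diagnosis to flip to "leak" -- exactly the kind of contrastive information the fingerprint is meant to convey graphically, e.g., as a bar per sensor.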