AI Summary
This work addresses the challenge of opaque decision-making in deep learning models for disaster management, which often undermines user trust during emergency response. To enhance interpretability, the authors propose an end-to-end explainable framework tailored for flood segmentation (using PIDNet) and vehicle detection (using YOLO), integrating an improved Layer-wise Relevance Propagation (LRP) method with Prototypical Concept-based Explanations (PCX). Key innovations include the first application of PCX to segmentation and detection tasks in disaster scenarios and a novel LRP relevance redistribution strategy designed specifically for sigmoid-gated fusion layers. Experiments on public flood datasets demonstrate that the approach delivers reliable concept-level local and global explanations while maintaining near real-time inference performance, making it suitable for deployment on resource-constrained drone platforms.
Abstract
Deep learning models for flood and wildfire segmentation and object detection enable precise, real-time disaster localization when deployed on embedded drone platforms. However, in natural disaster management, the lack of transparency in their decision-making process hinders the human trust required for emergency response. To address this, we present an explainability framework for understanding flood segmentation and car detection predictions on the widely used PIDNet and YOLO architectures. More specifically, we introduce a novel redistribution strategy that extends Layer-wise Relevance Propagation (LRP) explanations to sigmoid-gated element-wise fusion layers. This extension allows LRP relevances to flow through the fusion modules of PIDNet, covering the entire computation graph back to the input image. Furthermore, we apply Prototypical Concept-based Explanations (PCX) to provide both local and global explanations at the concept level, revealing which learned features drive the segmentation and detection of specific disaster semantic classes. Experiments on a publicly available flood dataset show that our framework provides reliable and interpretable explanations while maintaining near real-time inference capabilities, rendering it suitable for deployment on resource-constrained platforms, such as Unmanned Aerial Vehicles (UAVs).
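To make the redistribution idea concrete, the sketch below shows one way relevance could be propagated through a sigmoid-gated element-wise fusion. It assumes a fusion of the form `out = s * a + (1 - s) * b` with `s = sigmoid(gate)` (roughly the shape of PIDNet's attention-guided fusion), splits the incoming relevance between the two branches in proportion to their gated contributions with an epsilon-stabilized LRP rule, and treats the gate as a pure modulator that receives no relevance. These choices are illustrative assumptions; the paper's exact redistribution rule may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lrp_sigmoid_fusion(a, b, gate, relevance, eps=1e-9):
    """Redistribute relevance through a sigmoid-gated fusion
    out = s * a + (1 - s) * b, with s = sigmoid(gate).

    Relevance is split between branches a and b in proportion to
    each branch's gated contribution to the fused output, using an
    epsilon-stabilized denominator (LRP-epsilon style). The gate is
    treated as a modulating factor and receives no relevance, so
    total relevance is (approximately) conserved across the layer.
    NOTE: an illustrative assumption, not the paper's exact rule.
    """
    s = sigmoid(gate)
    z_a = s * a            # contribution of branch a
    z_b = (1.0 - s) * b    # contribution of branch b
    z = z_a + z_b
    z = z + eps * np.sign(z)  # stabilize near-zero denominators
    r_a = (z_a / z) * relevance
    r_b = (z_b / z) * relevance
    return r_a, r_b
```

In practice such a rule would be registered as a backward hook on the fusion module, so that standard LRP rules handle the convolutional layers on either side while this rule bridges the gap the fusion would otherwise leave in the relevance flow.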