Explainable AI for microseismic event detection

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep learning models such as PhaseNet detect microseismic events accurately, but their "black-box" nature limits interpretability and credibility in critical applications. This work systematically integrates eXplainable Artificial Intelligence (XAI) into microseismic event detection. We propose a SHAP-gated inference mechanism that jointly leverages Grad-CAM and SHAP to quantify attention distributions and feature contributions for P- and S-wave arrivals, and uses seismic-wave physics priors to validate the rationality of model decisions. The approach not only makes decisions transparent but also dynamically suppresses false positives via explanatory feedback, substantially improving robustness. On a real-world test set of 9,000 waveforms, the SHAP-gated model achieves an F1-score of 0.98 (precision 0.99, recall 0.97), outperforming the PhaseNet baseline with better noise resilience and operational stability.

📝 Abstract
Deep neural networks like PhaseNet show high accuracy in detecting microseismic events, but their black-box nature is a concern in critical applications. We apply explainable AI (XAI) techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Shapley Additive Explanations (SHAP), to interpret the PhaseNet model's decisions and improve its reliability. Grad-CAM highlights that the network's attention aligns with P- and S-wave arrivals. SHAP values quantify feature contributions, confirming that vertical-component amplitudes drive P-phase picks while horizontal components dominate S-phase picks, consistent with geophysical principles. Leveraging these insights, we introduce a SHAP-gated inference scheme that combines the model's output with an explanation-based metric to reduce errors. On a test set of 9,000 waveforms, the SHAP-gated model achieved an F1-score of 0.98 (precision 0.99, recall 0.97), outperforming the baseline PhaseNet (F1-score 0.97) and demonstrating enhanced robustness to noise. These results show that XAI can not only interpret deep learning models but also directly enhance their performance, providing a template for building trust in automated seismic detectors.
Problem

Research questions and friction points this paper is trying to address.

Interpreting black-box neural networks for microseismic detection
Quantifying feature contributions to P- and S-wave picks
Improving model reliability through explanation-gated inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applied Grad-CAM and SHAP to interpret PhaseNet decisions
Introduced SHAP-gated inference combining output with explanation metric
Enhanced model robustness and performance on seismic waveforms
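The paper does not give implementation details of the gating rule, but the idea of combining a detector's probability with an explanation-based consistency check can be sketched as follows. This is a minimal illustration, not the authors' code: the function names (`physics_consistency`, `shap_gated_detect`), the window size, the thresholds, and the assumption that channel index 2 is the vertical (Z) component are all hypothetical choices for the sake of the example.

```python
import numpy as np

def physics_consistency(shap_vals, pick_idx, half_win=50, component=2):
    """Fraction of absolute SHAP attribution that falls on the expected
    component (here channel 2, assumed vertical/Z for a P pick) inside a
    window around the picked arrival sample.

    shap_vals: (channels, samples) array of SHAP attributions.
    """
    lo, hi = max(0, pick_idx - half_win), pick_idx + half_win
    window = np.abs(shap_vals[:, lo:hi])
    total = window.sum()
    if total == 0.0:
        return 0.0
    return float(window[component].sum() / total)

def shap_gated_detect(prob, shap_vals, pick_idx,
                      prob_thresh=0.5, consistency_thresh=0.4):
    """Accept a pick only if both the detector's probability and the
    explanation-based consistency score clear their thresholds; otherwise
    the pick is suppressed as a likely false positive."""
    score = physics_consistency(shap_vals, pick_idx)
    return prob >= prob_thresh and score >= consistency_thresh
```

A pick whose attributions concentrate on the expected component near the arrival passes the gate, while a pick whose attributions are spread uniformly across channels (typical of noise triggers) is rejected even if the raw probability is high. This mirrors the paper's finding that vertical-component amplitudes drive P-phase picks.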