A Framework for Evaluating Faithfulness in Explainable AI for Machine Anomalous Sound Detection Using Frequency-Band Perturbation

📅 2026-01-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of quantitative evaluation of spectral attribution accuracy in existing explainable AI (XAI) methods for anomalous sound detection, which often rely on subjective visualizations. The authors propose the first objective evaluation framework based on band-wise masking perturbation, systematically removing frequency bands and measuring the resulting changes in model predictions to quantify the alignment between XAI attributions and the model’s true spectral sensitivity. The framework enables reproducible benchmarking and is applied to evaluate four prominent XAI methods: Integrated Gradients, Occlusion, Grad-CAM, and SmoothGrad. Experimental results demonstrate that Occlusion most accurately reflects the model’s spectral dependencies, whereas gradient-based methods yield unreliable attributions, thereby validating the necessity and effectiveness of the proposed framework.

πŸ“ Abstract
Explainable AI (XAI) is commonly applied to anomalous sound detection (ASD) models to identify which time-frequency regions of an audio signal contribute to an anomaly decision. However, most audio explanations rely on qualitative inspection of saliency maps, leaving open the question of whether these attributions accurately reflect the spectral cues the model uses. In this work, we introduce a new quantitative framework for evaluating XAI faithfulness in machine-sound analysis by directly linking attribution relevance to model behaviour through systematic frequency-band removal. This approach provides an objective measure of whether an XAI method for machine ASD correctly identifies the frequency regions that influence an ASD model's predictions. Using four widely adopted methods, namely Integrated Gradients, Occlusion, Grad-CAM and SmoothGrad, we show that XAI techniques differ in reliability, with Occlusion demonstrating the strongest alignment with true model sensitivity and gradient-based methods often failing to accurately capture spectral dependencies. The proposed framework offers a reproducible way to benchmark audio explanations and enables more trustworthy interpretation of spectrogram-based ASD systems.
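The band-wise masking idea described in the abstract can be sketched in a few lines: remove each frequency band in turn, record how much the anomaly score moves, and compare that against the attribution mass the XAI method assigned to the band. The function name, the band count, and the mean-fill masking strategy below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def band_faithfulness(model_score, spectrogram, attribution, n_bands=8):
    """Sketch of band-wise masking faithfulness evaluation.

    model_score  -- callable mapping a (freq, time) spectrogram to an anomaly score
    spectrogram  -- 2D array of shape (freq_bins, time_frames)
    attribution  -- 2D array of the same shape produced by an XAI method
    Returns the Pearson correlation between per-band attribution mass and
    the score change caused by masking each band (higher = more faithful).
    """
    n_freq = spectrogram.shape[0]
    edges = np.linspace(0, n_freq, n_bands + 1, dtype=int)
    base = model_score(spectrogram)

    deltas, masses = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = spectrogram.copy()
        masked[lo:hi, :] = spectrogram.mean()  # fill band with global mean energy
        deltas.append(abs(base - model_score(masked)))
        masses.append(np.abs(attribution[lo:hi, :]).sum())

    # High correlation means the attribution ranks bands the same way
    # the model's actual sensitivity does.
    return np.corrcoef(np.array(masses), np.array(deltas))[0, 1]
```

A faithful attribution concentrates its mass on exactly those bands whose removal shifts the score, driving the correlation toward 1; an unfaithful one (the failure mode the paper reports for some gradient-based methods) yields a correlation near zero.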
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Anomalous Sound Detection
Faithfulness Evaluation
Frequency-Band Perturbation
Saliency Maps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Anomalous Sound Detection
Faithfulness Evaluation
Frequency-Band Perturbation
Saliency Map Benchmarking
Alexander Buck
Computer Science, School of Science, Loughborough University, UK
Georgina Cosma
Computer Science, School of Science, Loughborough University, UK
Iain Phillips
Loughborough University
Paul Conway
School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, UK
Patrick Baker
Royal Air Force Rapid Capabilities Office, UK; Defence Science and Technology Laboratory, UK