🤖 AI Summary
Audio deepfake detection lacks reliable ground-truth explanations, and existing interpretability methods (e.g., SHAP, LRP) fail to precisely localize artifact regions. Method: We propose a supervised interpretability learning framework that leverages the time-frequency difference signal, computed via the STFT between real audio and its vocoder-synthesized counterpart, as a learnable, pixel-level surrogate ground truth for forgery artifacts. This differential signal is introduced for the first time as explicit supervision for artifact localization, and high-fidelity, fine-grained explanation maps are then generated with a diffusion model. Contribution/Results: Our approach overcomes inherent limitations of conventional attribution methods in the audio domain. Experiments on VocV4 and LibriSeVoc demonstrate qualitatively and quantitatively superior interpretability over SHAP, LRP, and other baselines, establishing new state-of-the-art (SOTA) results.
📝 Abstract
Evaluating explainability techniques such as SHAP and LRP for audio deepfake detection is challenging due to the lack of clear ground-truth annotations. In cases where ground truth can be obtained, we find that these methods struggle to provide accurate explanations. In this work, we propose a novel data-driven approach to identifying artifact regions in deepfake audio. We consider paired real and vocoded audio, and use the difference in their time-frequency representations as the ground-truth explanation. This difference signal then serves as supervision to train a diffusion model that exposes the deepfake artifacts in a given vocoded audio. Experimental results on the VocV4 and LibriSeVoc datasets demonstrate that our method outperforms traditional explainability techniques, both qualitatively and quantitatively.
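The core supervision signal described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the STFT parameters (window length, hop size, sampling rate) and the normalization step are assumptions, and the "vocoded" input here is simulated with a synthetic additive artifact rather than a real vocoder.

```python
# Sketch (assumed parameters): time-frequency difference map between a real
# signal and its vocoded counterpart, usable as a pixel-level surrogate
# ground truth for artifact localization.
import numpy as np
from scipy.signal import stft

def tf_difference(real, vocoded, fs=16000, nperseg=512, noverlap=384):
    """Magnitude difference |  |STFT(real)| - |STFT(vocoded)|  |, scaled to [0, 1]."""
    _, _, Zr = stft(real, fs=fs, nperseg=nperseg, noverlap=noverlap)
    _, _, Zv = stft(vocoded, fs=fs, nperseg=nperseg, noverlap=noverlap)
    diff = np.abs(np.abs(Zr) - np.abs(Zv))
    # Normalize so the map can act as a soft supervision mask for training.
    return diff / (diff.max() + 1e-8)

# Toy example: a pure tone vs. the same tone with a synthetic high-frequency
# "vocoder artifact". A real pipeline would use paired real/vocoded audio.
t = np.arange(16000) / 16000.0
real = np.sin(2 * np.pi * 440 * t)
vocoded = real + 0.1 * np.sin(2 * np.pi * 6000 * t)
mask = tf_difference(real, vocoded)  # shape: (freq_bins, time_frames)
```

In this toy case the mask concentrates its energy around the 6 kHz artifact band, which is the behavior the difference signal is meant to capture: regions where synthesis diverges from the original recording receive high values and serve as localization targets.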