🤖 AI Summary
Existing interpretability methods for speech-to-text generation overlook the autoregressive nature of the task, failing to deliver phoneme-level, temporally sensitive, and faithful explanations.
Method: This paper is the first to bring feature attribution to autoregressive speech-to-text generation, proposing a joint spectrogram–text perturbation attribution framework. It applies localized perturbations in the spectrogram domain and also accounts for the already-generated tokens, producing fine-grained, phoneme-aligned explanations for each output token.
Contribution/Results: The approach moves beyond classification-oriented interpretability paradigms toward cross-modal (speech→text), temporally aware attribution. Experiments on ASR and speech translation show that the resulting explanations are faithful to model behavior and rated as plausible in human evaluations, supporting the trustworthy deployment of large speech models.
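To make the method concrete, here is a minimal sketch of perturbation-based attribution for one output token of an autoregressive speech-to-text model. It is not the paper's implementation: `token_logprob` is a hypothetical stand-in for a real model's scoring function (here a deterministic toy so the sketch runs), and the patch size, fill value, and `mask_id` are illustrative assumptions. The idea it demonstrates is the one described above: occlude a time–frequency patch of the spectrogram (or one previously generated token) and record how much the target token's score drops.

```python
import numpy as np

def token_logprob(spectrogram, prefix_tokens, target_token):
    # Hypothetical stand-in for an autoregressive speech-to-text model:
    # should return log P(target_token | spectrogram, prefix_tokens).
    # Here: a fixed linear toy score so the sketch is runnable end to end.
    rng = np.random.default_rng(0)
    w = rng.standard_normal(spectrogram.shape)
    return float((w * spectrogram).sum()) + 0.01 * sum(prefix_tokens) + target_token

def spectrogram_attribution(spectrogram, prefix_tokens, target_token,
                            patch=(8, 4), fill=0.0):
    """Occlusion-style attribution over the spectrogram: zero out one
    time-frequency patch at a time and record the score drop it causes."""
    base = token_logprob(spectrogram, prefix_tokens, target_token)
    n_time, n_freq = spectrogram.shape
    heat = np.zeros((n_time, n_freq))
    for t0 in range(0, n_time, patch[0]):
        for f0 in range(0, n_freq, patch[1]):
            perturbed = spectrogram.copy()
            perturbed[t0:t0 + patch[0], f0:f0 + patch[1]] = fill
            drop = base - token_logprob(perturbed, prefix_tokens, target_token)
            heat[t0:t0 + patch[0], f0:f0 + patch[1]] = drop
    return heat

def prefix_attribution(spectrogram, prefix_tokens, target_token, mask_id=0):
    """Same perturbation idea applied to the previously generated tokens:
    replace one prefix token with a mask id and measure the score drop."""
    base = token_logprob(spectrogram, prefix_tokens, target_token)
    drops = []
    for i in range(len(prefix_tokens)):
        masked = list(prefix_tokens)
        masked[i] = mask_id
        drops.append(base - token_logprob(spectrogram, masked, target_token))
    return drops
```

A high value in `heat` marks a spectrogram region (ideally a phoneme-sized patch) whose removal hurts the prediction of the current token, while `prefix_attribution` captures the autoregressive dependence on earlier output tokens; a joint method combines both views per generated token.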
📝 Abstract
Spurred by the demand for interpretable models, research on eXplainable AI for language technologies has experienced significant growth, with feature attribution methods emerging as a cornerstone of this progress. While prior work in NLP explored such methods for classification tasks and textual applications, explainability intersecting generation and speech is lagging, with existing techniques failing to account for the autoregressive nature of state-of-the-art models and to provide fine-grained, phonetically meaningful explanations. We address this gap by introducing Spectrogram Perturbation for Explainable Speech-to-text Generation (SPES), a feature attribution technique applicable to sequence generation tasks with autoregressive models. SPES provides explanations for each predicted token based on both the input spectrogram and the previously generated tokens. Extensive evaluation on speech recognition and translation demonstrates that SPES generates explanations that are faithful and plausible to humans.