A Frame-based Attention Interpretation Method for Relevant Acoustic Feature Extraction in Long Speech Depression Detection

📅 2024-06-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two clinical bottlenecks in speech-based depression detection: high noise in segment-level annotations and poor model interpretability. Methodologically, the authors propose an interpretable speech-level model: (1) a speech-level Audio Spectrogram Transformer that models long-duration speech end-to-end, thereby circumventing noisy segment-level labels; and (2) a frame-level attention interpretation mechanism that localizes clinically meaningful acoustic segments—such as loudness decay and fundamental frequency (F0) decline—within the raw waveform. Experiments demonstrate statistically significant improvements over segment-level baselines. Attention heatmaps align with clinical literature, corroborating reduced loudness and lowered F0 as key acoustic biomarkers of depression. To the authors' knowledge, this is the first work to establish a closed-loop validation from model attention to interpretable, clinically grounded acoustic features—enhancing both the trustworthiness and clinical utility of speech-based depression screening tools.
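The frame-level attention interpretation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a ViT-style Audio Spectrogram Transformer whose [CLS] token attends to spectrogram patches laid out in a (frequency × time) grid, and the function names, grid layout, and patch hop length are hypothetical. The idea is to collapse [CLS]→patch attention onto the time axis, map time patches back to timestamps in the waveform, and select the most-attended segments for acoustic feature extraction (e.g. loudness, F0) by clinicians.

```python
import numpy as np

def frame_attention_to_time(cls_attn, n_time_patches, n_freq_patches,
                            patch_hop_s=0.1):
    """Collapse [CLS]->patch attention onto the time axis.

    cls_attn: array of shape (n_heads, n_patches), the attention weights
        from the [CLS] token to each spectrogram patch. Patches are
        assumed to tile the spectrogram as an
        (n_freq_patches x n_time_patches) row-major grid (hypothetical
        layout; real models may order patches differently).
    patch_hop_s: assumed duration (s) covered by one time patch.

    Returns (relevance, timestamps): per-time-patch relevance summing
    to 1, and the start time of each time patch in seconds.
    """
    attn = cls_attn.mean(axis=0)                       # average over heads
    grid = attn.reshape(n_freq_patches, n_time_patches)
    relevance = grid.sum(axis=0)                       # collapse frequency axis
    relevance = relevance / relevance.sum()            # normalise to a distribution
    timestamps = np.arange(n_time_patches) * patch_hop_s
    return relevance, timestamps

def top_attention_segments(relevance, timestamps, frac=0.2):
    """Timestamps of the top `frac` most-attended time patches,
    i.e. the waveform regions to hand to acoustic feature extraction."""
    k = max(1, int(round(frac * len(relevance))))
    idx = np.argsort(relevance)[-k:]
    return np.sort(timestamps[idx])

# Toy example: 4 heads, a 2x5 patch grid, with attention peaking
# on the 4th time column (patch indices 3 and 8 in row-major order).
cls_attn = np.ones((4, 10))
cls_attn[:, 3] = 10.0
cls_attn[:, 8] = 10.0
rel, t = frame_attention_to_time(cls_attn, n_time_patches=5, n_freq_patches=2)
segments = top_attention_segments(rel, t, frac=0.2)
```

In this toy run the most-attended segment starts at 0.3 s (the 4th time patch). In practice one would then compute loudness and F0 only over such segments, so that the features shown to clinicians come from prediction-relevant parts of the recording.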

📝 Abstract
Speech-based depression detection tools could help early screening of depression. Here, we address two issues that may hinder the clinical practicality of such tools: segment-level labelling noise and a lack of model interpretability. We propose a speech-level Audio Spectrogram Transformer to avoid segment-level labelling. We observe that the proposed model significantly outperforms a segment-level model, providing evidence for the presence of segment-level labelling noise in audio modality and the advantage of longer-duration speech analysis for depression detection. We introduce a frame-based attention interpretation method to extract acoustic features from prediction-relevant waveform signals for interpretation by clinicians. Through interpretation, we observe that the proposed model identifies reduced loudness and F0 as relevant signals of depression, which aligns with the speech characteristics of depressed patients documented in clinical studies.
Problem

Research questions and friction points this paper is trying to address.

Detects depression using long-duration speech analysis
Identifies prediction-relevant acoustic features for clinical interpretation
Improves reliability by reducing segment-level labeling noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses long-duration speech for depression detection
Introduces interpretable Audio Spectrogram Transformer
Identifies acoustic features like loudness and F0
Qingkun Deng
School of Health in Social Science, The University of Edinburgh, UK
Saturnino Luz
The University of Edinburgh
Digital Biomarkers · Precision Medicine · Deep Phenotyping · Speech and Signal Processing · Machine Learning
Sofia de la Fuente Garcia
School of Health in Social Science, The University of Edinburgh, UK