🤖 AI Summary
Automatic diagnosis of voice disorders is hindered by the scarcity of pathological speech data and the heterogeneity of multi-source recordings (e.g., sentence reading, sustained vowel phonation). To address this, we propose MVP, a raw-waveform-based multi-source speech fusion framework. MVP employs an end-to-end Transformer architecture that operates directly on waveforms and integrates three hierarchical fusion mechanisms: waveform concatenation, intermediate feature fusion, and decision-level fusion. Intermediate-layer feature fusion proves most effective at cross-source complementary modeling of pathological characteristics across the two recording types. Evaluated on German, Portuguese, and Italian datasets, the approach achieves up to a 13% improvement in AUC over single-source baselines. This work establishes a scalable, non-invasive solution for low-resource, multi-source voice pathology detection.
📝 Abstract
Voice disorders significantly impact patient quality of life, yet non-invasive automated diagnosis remains under-explored due to both the scarcity of pathological voice data and the variability in recording sources. This work introduces MVP (Multi-source Voice Pathology detection), a novel approach that leverages transformers operating directly on raw voice signals. We explore three fusion strategies to combine sentence reading and sustained vowel recordings: waveform concatenation, intermediate feature fusion, and decision-level combination. Empirical validation across the German, Portuguese, and Italian languages shows that intermediate feature fusion using transformers best captures the complementary characteristics of both recording types. Our approach achieves up to +13% AUC improvement over single-source methods.
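To make the three fusion strategies concrete, here is a minimal pure-Python sketch of where each one merges the two recording sources. The `encode` and `classify` functions are hypothetical stand-ins for the paper's transformer encoder and classification head (the actual model is not shown in this summary); only the *placement* of the fusion step mirrors the three strategies described above.

```python
import math

def encode(waveform):
    # Hypothetical stand-in for a transformer encoder on a raw waveform:
    # maps a signal to a small fixed-size feature vector.
    mean = sum(waveform) / len(waveform)
    energy = sum(x * x for x in waveform) / len(waveform)
    return [mean, energy]

def classify(feature):
    # Hypothetical stand-in for a classification head:
    # maps a feature vector to a pathology probability via a sigmoid.
    return 1.0 / (1.0 + math.exp(-sum(feature)))

def waveform_concatenation(sentence_wave, vowel_wave):
    # Strategy 1: concatenate the raw waveforms, then run a single
    # encoder + classifier over the joined signal.
    return classify(encode(sentence_wave + vowel_wave))

def intermediate_fusion(sentence_wave, vowel_wave):
    # Strategy 2: encode each source separately, merge the intermediate
    # feature vectors (element-wise average here), then classify once.
    fa, fb = encode(sentence_wave), encode(vowel_wave)
    fused = [(a + b) / 2 for a, b in zip(fa, fb)]
    return classify(fused)

def decision_fusion(sentence_wave, vowel_wave):
    # Strategy 3: run the full pipeline per source and combine only
    # the output probabilities.
    pa = classify(encode(sentence_wave))
    pb = classify(encode(vowel_wave))
    return (pa + pb) / 2

# Toy signals standing in for a sentence recording and a sustained vowel.
sent = [0.1, -0.2, 0.3, 0.0]
vowel = [0.05, 0.05, 0.04, 0.06]
for name, fn in [("concat", waveform_concatenation),
                 ("intermediate", intermediate_fusion),
                 ("decision", decision_fusion)]:
    print(name, round(fn(sent, vowel), 3))
```

The key structural difference is simply *where* information from the two sources is merged: at the input (raw samples), in the middle (encoder features, the variant the paper finds best), or at the output (probabilities).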