🤖 AI Summary
This work addresses the cross-lingual generalization challenge in Parkinson's disease (PD) speech detection in a bilingual setting. We propose a bilingual-aware dual-head deep model that separately processes diadochokinetic (DDK) tasks and natural continuous speech, with input-driven dynamic routing that activates the head matching each utterance type. To enhance cross-lingual transfer, we introduce a cross-lingual adaptive layer and a fused representation module that integrates self-supervised learning (SSL) features, wavelet-based time-frequency representations, convolutional bottleneck architectures, contrastive learning, and adaptive normalization. Experiments on the Slovak EWA-DB and Spanish PC-GITA datasets demonstrate that our method significantly outperforms monolingual baselines and naive multilingual concatenation strategies, achieving an 8.2% improvement in cross-lingual average accuracy. To our knowledge, this is the first approach to simultaneously improve PD detection performance across two non-English languages, overcoming the generalization limitations inherent in monolingual models.
📝 Abstract
This work tackles the problem of Parkinson's disease (PD) detection from the speech signal in a bilingual setting by proposing an ad hoc dual-head deep neural architecture for type-based binary classification. One head is specialized for diadochokinetic patterns; the other detects natural speech patterns present in continuous spoken utterances. Only one of the two heads is active at a time, according to the nature of the input. Speech representations are extracted from self-supervised learning (SSL) models and wavelet transforms. Adaptive layers, convolutional bottlenecks, and contrastive learning are exploited to reduce variation across languages. Our solution is assessed on two distinct datasets, EWA-DB and PC-GITA, which cover the Slovak and Spanish languages, respectively. Results indicate that conventional models trained on a single-language dataset struggle with cross-linguistic generalization, and that naive combinations of datasets are suboptimal. In contrast, our model improves generalization on both languages simultaneously.
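The core routing idea from the abstract — a single model in which exactly one task-specific head fires per input — can be sketched in a few lines. Everything below is a hypothetical illustration, not the paper's implementation: the feature dimension, weight values, and function names are all invented for the example, and the real model uses learned SSL/wavelet features rather than fixed vectors.

```python
import numpy as np

# Minimal sketch of input-driven dynamic routing between two heads:
# a DDK head and a continuous-speech head, each a binary PD classifier.
# Dimensions and weights are illustrative placeholders only.

FEAT_DIM = 8

# Fixed, made-up weights standing in for two trained linear heads.
W_DDK = np.arange(FEAT_DIM) * 0.1        # head for diadochokinetic tasks
W_SPEECH = -W_DDK                        # head for continuous speech

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_head_forward(features, task):
    """Route a feature vector to exactly one head based on the input type."""
    if task == "ddk":
        logit = features @ W_DDK
    elif task == "speech":
        logit = features @ W_SPEECH
    else:
        raise ValueError(f"unknown task type: {task}")
    return sigmoid(logit)  # probability assigned to the PD class

# One utterance-level feature vector, routed through each head in turn.
x = np.ones(FEAT_DIM)
p_ddk = dual_head_forward(x, "ddk")
p_speech = dual_head_forward(x, "speech")
```

The point of the sketch is the control flow, not the arithmetic: at inference only the head matching the input type contributes a prediction, so the two heads never have to average over incompatible speech-task distributions.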