Dynamic Fusion Multimodal Network for SpeechWellness Detection

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited representational capacity of unimodal (text-only or speech-only) modeling in adolescent suicide risk prediction, this paper proposes a lightweight dynamic multimodal fusion network. Methodologically, it jointly models time-domain and time-frequency speech features alongside textual semantic representations, and incorporates a learnable dynamic weighting module that adaptively adjusts each modality's contribution during fusion. Parameter efficiency comes from simplifying the original baseline architecture rather than from post-hoc compression. Compared to the challenge baseline, the proposed approach reduces the parameter count by 78% while improving classification accuracy by 5% in the 1st SpeechWellness Detection Challenge. The work pairs lightweight architectural design with dynamic multimodal fusion for adolescent suicide risk identification, aiming for practical deployability without compromising discriminative capability.
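
The paper does not include code, so the following is a minimal PyTorch sketch of what such a learnable dynamic weighting module could look like, assuming each modality branch emits a fixed-size embedding. The class name, gating design, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DynamicFusionBlock(nn.Module):
    """Hypothetical sketch of a learnable dynamic fusion block.

    Assumes each modality branch (time-domain speech, time-frequency
    speech, text semantics) emits an embedding of the same size `dim`.
    A small gating network scores each modality per sample; a softmax
    turns the scores into fusion weights, so each modality's
    contribution is adjusted adaptively rather than fixed.
    """

    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        # One scalar score per modality, conditioned on the concatenated embeddings.
        self.gate = nn.Linear(dim * num_modalities, num_modalities)

    def forward(self, embeddings: list[torch.Tensor]) -> torch.Tensor:
        # embeddings: list of (batch, dim) tensors, one per modality.
        stacked = torch.stack(embeddings, dim=1)           # (batch, M, dim)
        scores = self.gate(torch.cat(embeddings, dim=-1))  # (batch, M)
        weights = scores.softmax(dim=-1).unsqueeze(-1)     # (batch, M, 1)
        return (weights * stacked).sum(dim=1)              # (batch, dim)
```

An even lighter reading of "learnable weights" would be a global per-modality weight vector learned directly as a parameter rather than conditioned on the input; the paper's wording is compatible with either design.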

📝 Abstract
Suicide is one of the leading causes of death among adolescents. Previous suicide risk prediction studies have primarily focused on either textual or acoustic information in isolation; however, the integration of multimodal signals, such as speech and text, offers a more comprehensive understanding of an individual's mental state. Motivated by this, and in the context of the 1st SpeechWellness detection challenge, we explore a lightweight multi-branch multimodal system based on a dynamic fusion mechanism for SpeechWellness detection. To address the limitation of prior approaches that rely solely on time-domain waveforms for acoustic analysis, our system incorporates both time-domain and time-frequency (TF) domain acoustic features, as well as semantic representations. In addition, we introduce a dynamic fusion block to adaptively integrate information from different modalities. Specifically, it applies learnable weights to each modality during the fusion process, enabling the model to adjust the contribution of each modality. To enhance computational efficiency, we design a lightweight structure by simplifying the original baseline model. Experimental results demonstrate that the proposed system outperforms the challenge baseline, achieving a 78% reduction in model parameters and a 5% improvement in accuracy.
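
For context, here is a hypothetical sketch of the three-branch front end the abstract describes (time-domain waveform, time-frequency spectrogram, and text semantics). All layer choices, names, and sizes are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiBranchFrontEnd(nn.Module):
    """Illustrative three-branch front end (not the authors' published code).

    - Time-domain branch: 1-D convolutions over the raw waveform.
    - Time-frequency branch: 2-D convolutions over a log-magnitude STFT.
    - Text branch: projects precomputed sentence embeddings (e.g., from a
      pretrained text encoder) down to the shared embedding size `dim`.
    """

    def __init__(self, dim: int = 128, text_dim: int = 768, n_fft: int = 512):
        super().__init__()
        self.n_fft = n_fft
        self.time_branch = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=11, stride=5),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> (batch, dim, 1)
        )
        self.tf_branch = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over freq x time -> (batch, dim, 1, 1)
        )
        self.text_proj = nn.Linear(text_dim, dim)

    def forward(self, wave: torch.Tensor, text_emb: torch.Tensor) -> list[torch.Tensor]:
        # wave: (batch, samples); text_emb: (batch, text_dim)
        t = self.time_branch(wave.unsqueeze(1)).squeeze(-1)  # (batch, dim)
        window = torch.hann_window(self.n_fft, device=wave.device)
        spec = torch.stft(wave, self.n_fft, window=window, return_complex=True).abs()
        spec = torch.log1p(spec).unsqueeze(1)                # (batch, 1, freq, frames)
        tf = self.tf_branch(spec).flatten(1)                 # (batch, dim)
        s = self.text_proj(text_emb)                         # (batch, dim)
        return [t, tf, s]  # ready for a fusion block such as the one sketched above
```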
Problem

Research questions and friction points this paper is trying to address.

Detecting suicide risk using multimodal speech and text signals
Integrating time-domain and time-frequency acoustic features adaptively
Reducing computational complexity while improving detection accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic fusion mechanism for multimodal integration
Time-domain and time-frequency acoustic features
Lightweight structure with reduced parameters (see the parameter-count sketch after this list)
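
The 78% figure refers to trainable parameter count. A standard way to measure it in PyTorch is sketched below; `proposed` and `baseline` in the comment are placeholder names, not objects from the paper.

```python
import torch.nn as nn

def count_trainable_params(model: nn.Module) -> int:
    """Count trainable parameters; this is the quantity behind figures
    such as the reported 78% reduction."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical usage:
# reduction = 1 - count_trainable_params(proposed) / count_trainable_params(baseline)
```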