🤖 AI Summary
This work proposes DIVINE, a multimodal framework for precise diagnosis and severity assessment of orofacial neurological disorders such as ALS and stroke, by integrating speech and facial video data. Leveraging audio and video foundation models, including DeepSeek-VL2 and TRILLsson, DIVINE explicitly disentangles shared and modality-specific features through a hierarchical variational bottleneck, an adaptive sparse gating fusion mechanism, and learnable symptom tokens that enable joint multitask prediction. By uniquely combining cross-modal disentanglement with symptom token learning, the framework improves interpretability and generalizes well under single-modality constraints. Evaluated on the Toronto NeuroFace dataset, DIVINE achieves 98.26% accuracy and a 97.51% F1 score, substantially outperforming existing unimodal approaches and conventional fusion methods.
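
To make the pipeline concrete, here is a minimal PyTorch sketch of the two mechanisms named above: a variational bottleneck that splits each modality's foundation-model embedding into shared and modality-specific latents, and a gated fusion over the resulting streams. This is not the authors' code; `VariationalBottleneck`, `DisentangledGatedFusion`, every dimension, and the plain softmax gate (standing in for whatever sparsity the paper's adaptive sparse gate actually enforces, e.g. top-k or entmax) are illustrative assumptions.

```python
# Hedged sketch of disentanglement + gated fusion; all names and
# dimensions are assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalBottleneck(nn.Module):
    """Maps an embedding to a Gaussian latent (mu, logvar) and samples z."""
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # KL divergence to a standard normal prior, averaged over the batch
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl

class DisentangledGatedFusion(nn.Module):
    """Shared + specific latents per modality, fused through a learned gate."""
    def __init__(self, audio_dim=1024, video_dim=1024, z_dim=128):
        super().__init__()
        self.audio_shared = VariationalBottleneck(audio_dim, z_dim)
        self.audio_specific = VariationalBottleneck(audio_dim, z_dim)
        self.video_shared = VariationalBottleneck(video_dim, z_dim)
        self.video_specific = VariationalBottleneck(video_dim, z_dim)
        self.gate = nn.Linear(4 * z_dim, 4)  # one weight per latent stream

    def forward(self, audio_emb, video_emb):
        za_s, kl1 = self.audio_shared(audio_emb)
        za_p, kl2 = self.audio_specific(audio_emb)
        zv_s, kl3 = self.video_shared(video_emb)
        zv_p, kl4 = self.video_specific(video_emb)
        streams = torch.stack([za_s, za_p, zv_s, zv_p], dim=1)  # (B, 4, z)
        w = F.softmax(self.gate(streams.flatten(1)), dim=-1)    # (B, 4) stream weights
        fused = (w.unsqueeze(-1) * streams).sum(dim=1)          # (B, z)
        return fused, kl1 + kl2 + kl3 + kl4
```

A design note on why this layout matches the single-modality claim: because each modality gets its own shared and specific bottlenecks and the gate weights the four streams independently, the same fused representation can still be formed when one modality is missing, with the gate simply redistributing weight over the remaining streams.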
📝 Abstract
In this study, we present a multimodal framework for predicting neuro-facial disorders by capturing both vocal and facial cues. We hypothesize that explicitly disentangling shared and modality-specific representations within multimodal foundation model embeddings can enhance clinical interpretability and generalization. To validate this hypothesis, we propose DIVINE, a fully disentangled multimodal framework that operates on representations extracted from state-of-the-art (SOTA) audio and video foundation models, incorporating hierarchical variational bottlenecks, sparse gated fusion, and learnable symptom tokens. DIVINE operates in a multitask learning setup to jointly predict diagnostic categories (Healthy Control, ALS, Stroke) and severity levels (Mild, Moderate, Severe). The model is trained on synchronized audio and video inputs and evaluated on the Toronto NeuroFace dataset under full (audio-video) as well as single-modality (audio-only and video-only) test conditions. Our proposed approach achieves SOTA results, with the DeepSeek-VL2 and TRILLsson combination reaching 98.26% accuracy and a 97.51% F1-score. Under modality-constrained scenarios the framework performs well, generalizing strongly when tested with video-only or audio-only inputs and consistently outperforming unimodal models and baseline fusion techniques. To the best of our knowledge, DIVINE is the first framework to combine cross-modal disentanglement, adaptive fusion, and multitask learning to comprehensively assess neurological disorders using synchronized speech and facial video.
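
The remaining pieces the abstract names, learnable symptom tokens and the multitask heads for diagnosis (Healthy Control / ALS / Stroke) and severity (Mild / Moderate / Severe), could look like the sketch below, meant to be read together with the fusion sketch above. Again, the token count, attention configuration, and head shapes are assumptions for illustration, not the paper's reported architecture.

```python
# Hedged sketch of symptom tokens + multitask heads; token count,
# attention setup, and head shapes are assumptions.
import torch
import torch.nn as nn

class SymptomTokenMultitaskHead(nn.Module):
    def __init__(self, z_dim=128, num_tokens=8, num_classes=3, num_severities=3):
        super().__init__()
        # Learnable symptom tokens, shared across all inputs
        self.symptom_tokens = nn.Parameter(torch.randn(num_tokens, z_dim))
        self.attn = nn.MultiheadAttention(z_dim, num_heads=4, batch_first=True)
        self.diagnosis_head = nn.Linear(z_dim, num_classes)   # HC / ALS / Stroke
        self.severity_head = nn.Linear(z_dim, num_severities) # Mild / Moderate / Severe

    def forward(self, fused):                                  # fused: (B, z_dim)
        B = fused.size(0)
        tokens = self.symptom_tokens.expand(B, -1, -1)         # (B, T, z)
        ctx = fused.unsqueeze(1)                               # (B, 1, z)
        attended, _ = self.attn(tokens, ctx, ctx)              # tokens query the fused features
        pooled = attended.mean(dim=1)                          # (B, z)
        return self.diagnosis_head(pooled), self.severity_head(pooled)
```

In a multitask setup like the one described here, the two head losses (a cross-entropy term each for diagnosis and severity) would be summed with the KL terms from the variational bottlenecks and backpropagated jointly; per-token attention weights then offer one plausible route to the symptom-level interpretability the abstract claims.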