🤖 AI Summary
Non-verbal voice-based emotion recognition (NVER) demands effective modeling of temporal dynamics and intrinsic structural patterns in emotional prosody. Method: This work pioneers the exploration of Mamba-based state-space models as audio foundation models (MAFMs) for NVER, leveraging their superior capacity for long-range temporal dependency modeling. We propose RENO, a novel fusion framework featuring (i) a Rényi divergence-based alignment loss to harmonize heterogeneous representations between MAFMs and Transformer-based attention models, and (ii) a self-attention-driven cross-modal representation interaction mechanism. Contribution/Results: MAFMs alone surpass existing state-of-the-art (SOTA) attention models in unimodal performance. RENO achieves new SOTA on major NVER benchmarks, notably enhancing discrimination of subtle emotions (e.g., anxiety, fatigue). These results empirically validate the efficacy of state-space modeling combined with controllable alignment-based fusion for NVER.
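For reference, the Rényi divergence of order α between discrete distributions P and Q is the standard quantity

$$
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log \sum_i p_i^{\alpha}\, q_i^{\,1-\alpha}, \qquad \alpha > 0,\; \alpha \neq 1,
$$

which recovers the KL divergence as α → 1. The summary above does not specify how the paper instantiates this as an alignment loss between MAFM and AAFM representations, so this is only the textbook definition, not the authors' exact formulation.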
📝 Abstract
In this work, we focus on emotion recognition from non-verbal vocal sounds (NVER). We investigate Mamba-based audio foundation models (MAFMs) for the first time for NVER and hypothesize that MAFMs will outperform attention-based audio foundation models (AAFMs) for NVER by leveraging their state-space modeling to capture intrinsic emotional structures more effectively. Unlike AAFMs, which may amplify irrelevant patterns due to their attention mechanisms, MAFMs will extract more stable and context-aware representations, enabling better differentiation of subtle non-verbal emotional cues. Our experiments with state-of-the-art (SOTA) AAFMs and MAFMs validate our hypothesis. Further, motivated by related research such as speech emotion recognition and synthetic speech detection, where fusion of foundation models (FMs) has shown improved performance, we also explore fusion of FMs for NVER. To this end, we propose RENO, which uses Rényi divergence as a novel loss function for effective alignment of the FMs. It also makes use of self-attention for better intra-representation interaction of the FMs. With RENO, through the heterogeneous fusion of MAFMs and AAFMs, we achieve the best performance in comparison to individual FMs and their fusion, and set a new SOTA in comparison to previous work.
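The abstract does not detail the fusion architecture, so the following is only a minimal PyTorch sketch of the idea it describes: project MAFM and AAFM embeddings into a shared space, penalise their mismatch with a Rényi divergence computed on softmax-normalised projections, and let self-attention handle the representation interaction before classification. All names and hyperparameters here (RenyiAlignmentFusion, shared_dim, alpha, the choice of softmax normalisation) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def renyi_divergence(p, q, alpha=0.5, eps=1e-8):
    """Rényi divergence D_alpha(p || q) between discrete distributions.

    p, q: tensors of shape (batch, dim) summing to 1 along the last axis.
    As alpha -> 1 this approaches the KL divergence.
    """
    p = p.clamp_min(eps)
    q = q.clamp_min(eps)
    return (1.0 / (alpha - 1.0)) * torch.log(
        (p.pow(alpha) * q.pow(1.0 - alpha)).sum(dim=-1)
    )


class RenyiAlignmentFusion(nn.Module):
    """Hypothetical sketch: align MAFM and AAFM embeddings with a Rényi
    divergence loss, then fuse them via self-attention for classification."""

    def __init__(self, mafm_dim, aafm_dim, num_classes,
                 shared_dim=256, num_heads=4, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        self.proj_mafm = nn.Linear(mafm_dim, shared_dim)
        self.proj_aafm = nn.Linear(aafm_dim, shared_dim)
        self.attn = nn.MultiheadAttention(shared_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * shared_dim, num_classes)

    def forward(self, mafm_emb, aafm_emb):
        # Project both foundation-model embeddings into a shared space.
        z_m = self.proj_mafm(mafm_emb)   # (batch, shared_dim)
        z_a = self.proj_aafm(aafm_emb)   # (batch, shared_dim)

        # Alignment term: Rényi divergence between normalised projections.
        align_loss = renyi_divergence(
            F.softmax(z_m, dim=-1), F.softmax(z_a, dim=-1), alpha=self.alpha
        ).mean()

        # Treat the two representations as a 2-token sequence and let
        # self-attention model their interaction.
        tokens = torch.stack([z_m, z_a], dim=1)      # (batch, 2, shared_dim)
        fused, _ = self.attn(tokens, tokens, tokens)  # (batch, 2, shared_dim)

        logits = self.classifier(fused.flatten(start_dim=1))
        return logits, align_loss
```

In training, the alignment term would typically be added to the emotion classification objective with a weighting coefficient, e.g. `loss = F.cross_entropy(logits, labels) + lam * align_loss`, with `lam` tuned on a validation set; this weighting scheme is likewise an assumption, as the abstract does not state how the losses are combined.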