DeepMLF: Multimodal language model with learnable tokens for deep fusion in sentiment analysis

πŸ“… 2025-04-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing multimodal sentiment analysis (MSA) methods suffer from insufficient fusion depth and ambiguous allocation of multimodal capabilities. Method: We propose a learnable fusion token–driven deep cross-layer fusion architecture, embedding controllable audio-visual interaction pathways and unimodal pathways into a pretrained language model (LM). We introduce a hierarchical learnable fusion token mechanism to jointly enable modality interaction and information isolation; identify optimal fusion depth (layers 5–7) and a lightweight token set (∼20 tokens); and design a fusion-aware curriculum learning paradigm. Multimodal information is dynamically integrated via LM self-attention and multimodal cross-attention, with joint optimization of modality-specific losses and the language modeling objective. Contribution/Results: Our method achieves state-of-the-art performance on three major MSA benchmarks. Ablation studies validate the efficacy of the fusion design, gating mechanism, and multi-objective training, and confirm scalability to large language models.

πŸ“ Abstract
While multimodal fusion has been extensively studied in Multimodal Sentiment Analysis (MSA), the role of fusion depth and multimodal capacity allocation remains underexplored. In this work, we position fusion depth, scalability, and dedicated multimodal capacity as primary factors for effective fusion. We introduce DeepMLF, a novel multimodal language model (LM) with learnable tokens tailored toward deep fusion. DeepMLF leverages an audiovisual encoder and a pretrained decoder LM augmented with multimodal information across its layers. We append learnable tokens to the LM that: 1) capture modality interactions in a controlled fashion and 2) preserve independent information flow for each modality. These fusion tokens gather linguistic information via causal self-attention in LM Blocks and integrate with audiovisual information through cross-attention MM Blocks. Serving as dedicated multimodal capacity, this design enables progressive fusion across multiple layers, providing depth in the fusion process. Our training recipe combines modality-specific losses and language modelling loss, with the decoder LM tasked to predict ground truth polarity. Across three MSA benchmarks with varying dataset characteristics, DeepMLF achieves state-of-the-art performance. Our results confirm that deeper fusion leads to better performance, with optimal fusion depths (5–7) exceeding those of existing approaches. Additionally, our analysis on the number of fusion tokens reveals that small token sets (∼20) achieve optimal performance. We examine the importance of representation learning order (fusion curriculum) through audiovisual encoder initialization experiments. Our ablation studies demonstrate the superiority of the proposed fusion design and gating while providing a holistic examination of DeepMLF's scalability to LLMs, and the impact of each training objective and embedding regularization.
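The core mechanism described in the abstract (fusion tokens attending to audiovisual features through a gated cross-attention MM Block) can be sketched roughly as follows. This is a minimal single-head NumPy sketch, not the paper's implementation: the function names, dimensions, and scalar gate are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mm_block(fusion_tokens, av_feats, W_q, W_k, W_v, gate):
    """Hypothetical MM Block: fusion tokens cross-attend to audiovisual features.

    A gate (here a scalar for simplicity; a learned per-dimension gate in
    practice) controls how much multimodal information enters the tokens.
    """
    Q = fusion_tokens @ W_q          # queries come from the fusion tokens
    K = av_feats @ W_k               # keys/values come from the AV encoder
    V = av_feats @ W_v
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    update = attn @ V                # (num_tokens, d) multimodal update
    return fusion_tokens + gate * update  # gated residual connection

rng = np.random.default_rng(0)
d = 8                                # toy hidden size
tokens = rng.normal(size=(20, d))    # ~20 fusion tokens, the reported optimum
av = rng.normal(size=(50, d))        # audiovisual feature sequence
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = mm_block(tokens, av, Wq, Wk, Wv, gate=0.5)
print(fused.shape)  # (20, 8)
```

Stacking such a block after the causal self-attention of several consecutive LM layers is what gives the fusion its depth; in the paper the best results come from fusing at 5–7 layers.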
Problem

Research questions and friction points this paper is trying to address.

Explores fusion depth and capacity in multimodal sentiment analysis.
Introduces DeepMLF for deep fusion with learnable tokens.
Achieves state-of-the-art performance across MSA benchmarks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable tokens for controlled modality interactions
Progressive fusion across multiple deep layers
Combined modality-specific and language modeling losses
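The multi-objective training listed above amounts to a weighted sum of the language modelling loss and per-modality losses. A trivial sketch, with purely illustrative weights and loss values (the actual weighting scheme is not specified here):

```python
def total_loss(lm_loss, modality_losses, weights):
    """Hypothetical combined objective: LM loss plus weighted modality losses."""
    return lm_loss + sum(w * l for w, l in zip(weights, modality_losses))

# Illustrative values: an LM loss plus audio and visual auxiliary losses
combined = total_loss(2.0, [0.5, 0.3], [1.0, 1.0])
print(combined)  # 2.8
```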
πŸ”Ž Similar Papers
No similar papers found.