CLCR: Cross-Level Semantic Collaborative Representation for Multimodal Learning

📅 2026-02-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal fusion approaches often compress asynchronous, multi-level semantic information into a single latent space, which can lead to semantic misalignment and error propagation. To address this, the work proposes Cross-Level Co-Representation (CLCR), an explicit cross-level semantic coordination framework that organizes features into a three-tier semantic hierarchy and separately models shared and modality-specific features at each level. By combining an Intra-Level Co-Exchange Domain (IntraCED) and an Inter-Level Co-Aggregation Domain (InterCAD) with a learnable token budget and an anchor-based synchronization mechanism, the method constrains cross-modal attention to the shared subspace, enabling precise semantic alignment and disentangled representation learning. CLCR achieves strong results across six benchmark tasks, including emotion recognition, event localization, and action recognition, demonstrating good generalization.

📝 Abstract
Multimodal learning aims to capture both shared and private information from multiple modalities. However, existing methods that project all modalities into a single latent space for fusion often overlook the asynchronous, multi-level semantic structure of multimodal data. This oversight induces semantic misalignment and error propagation, thereby degrading representation quality. To address this issue, we propose Cross-Level Co-Representation (CLCR), which explicitly organizes each modality's features into a three-level semantic hierarchy and specifies level-wise constraints for cross-modal interactions. First, a semantic hierarchy encoder aligns shallow, mid, and deep features across modalities, establishing a common basis for interaction. Then, at each level, an Intra-Level Co-Exchange Domain (IntraCED) factorizes features into shared and private subspaces and restricts cross-modal attention to the shared subspace via a learnable token budget. This design ensures that only shared semantics are exchanged and prevents leakage from private channels. To integrate information across levels, the Inter-Level Co-Aggregation Domain (InterCAD) synchronizes semantic scales using learned anchors, selectively fuses the shared representations, and gates private cues to form a compact task representation. We further introduce regularization terms to enforce separation of shared and private features and to minimize cross-level interference. Experiments on six benchmarks spanning emotion recognition, event localization, sentiment analysis, and action recognition show that CLCR achieves strong performance and generalizes well across tasks.
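The IntraCED idea described above can be illustrated with a minimal NumPy sketch: each modality's tokens are projected into shared and private subspaces, and cross-modal attention is computed only over a fixed number of shared tokens (a stand-in for the paper's learnable token budget). All function names, dimensions, and the fixed budget here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factorize(feats, w_shared, w_private):
    # Project a modality's tokens into shared and private subspaces.
    return feats @ w_shared, feats @ w_private

def budgeted_cross_attention(query_shared, key_shared, budget):
    # Keep only the top-`budget` key tokens (a crude, fixed stand-in
    # for the learnable token budget) and attend over that shared set.
    scores = query_shared @ key_shared.T / np.sqrt(key_shared.shape[1])
    keep = np.argsort(scores.sum(axis=0))[-budget:]  # highest-mass keys
    attn = softmax(scores[:, keep], axis=-1)
    return attn @ key_shared[keep]

d_in, d_sub, n_tok, budget = 16, 8, 10, 4
w_s = rng.normal(size=(d_in, d_sub))  # hypothetical shared projection
w_p = rng.normal(size=(d_in, d_sub))  # hypothetical private projection

audio = rng.normal(size=(n_tok, d_in))  # toy "audio" tokens
video = rng.normal(size=(n_tok, d_in))  # toy "video" tokens

a_shared, a_private = factorize(audio, w_s, w_p)
v_shared, v_private = factorize(video, w_s, w_p)

# Cross-modal exchange touches only the shared subspace; the private
# features (a_private, v_private) never enter the attention.
fused = budgeted_cross_attention(a_shared, v_shared, budget)
print(fused.shape)  # (10, 8)
```

In the actual method the budget is learned and InterCAD further aggregates these per-level outputs via learned anchors; this sketch only shows how restricting attention to a budgeted shared subspace keeps private channels out of the exchange.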
Problem

Research questions and friction points this paper is trying to address.

multimodal learning
semantic misalignment
multi-level semantic structure
error propagation
representation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Level Representation
Shared-Private Disentanglement
Multimodal Fusion
Semantic Hierarchy
Co-Attention with Token Budget