InfoMAE: Pair-Efficient Cross-Modal Alignment for Multimodal Time-Series Sensing Signals

📅 2025-04-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In IoT applications, the scarcity of high-quality paired multimodal time-series signals severely limits the performance of self-supervised learning. Method: This paper proposes an information-theoretic cross-modal alignment framework that improves multimodal data utilization by aligning unimodal pre-trained representations. It integrates joint-distribution-level and instance-level alignment mechanisms within a masked autoencoder (MAE) architecture, jointly optimizing mutual information maximization, contrastive learning, and distribution matching losses. Contribution/Results: The method achieves effective cross-modal alignment with only a small number of modality pairs. On real-world IoT downstream tasks, it improves multimodal performance by over 60% and boosts average unimodal accuracy by 22%, significantly reducing reliance on paired multimodal data while maintaining robust representation learning.
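The summary above describes an objective that combines an instance-level contrastive term with a distribution-level matching term. The sketch below is a minimal, hedged illustration of that idea, not InfoMAE's actual loss: it pairs a standard InfoNCE contrastive loss with a simple first-and-second-moment matching term (the paper's exact distribution-matching formulation is not given here, so moment matching is an assumed stand-in), and the weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def info_nce(za, zb, tau=0.1):
    """Instance-level alignment: paired rows (za[i], zb[i]) are positives."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                      # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy on the diagonal

def distribution_match(za, zb):
    """Distribution-level alignment stand-in: match means and covariances."""
    mean_gap = np.sum((za.mean(axis=0) - zb.mean(axis=0)) ** 2)
    cov_gap = np.sum((np.cov(za.T) - np.cov(zb.T)) ** 2)
    return mean_gap + cov_gap

def alignment_loss(za, zb, lam=0.5):
    """Combined objective over two modalities' embeddings (illustrative only)."""
    return info_nce(za, zb) + lam * distribution_match(za, zb)
```

As a sanity check, correctly paired embeddings should score a lower loss than mispaired ones, since both the diagonal contrastive term and the moment gaps shrink under alignment.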

📝 Abstract
Standard multimodal self-supervised learning (SSL) algorithms regard cross-modal synchronization as implicit supervisory labels during pretraining, thus posing high requirements on the scale and quality of multimodal samples. These constraints significantly limit the performance of sensing intelligence in IoT applications, as the heterogeneity and the non-interpretability of time-series signals result in abundant unimodal data but scarce high-quality multimodal pairs. This paper proposes InfoMAE, a cross-modal alignment framework that tackles the challenge of multimodal pair efficiency under the SSL setting by facilitating efficient cross-modal alignment of pretrained unimodal representations. InfoMAE achieves efficient cross-modal alignment with limited data pairs through a novel information theory-inspired formulation that simultaneously addresses distribution-level and instance-level alignment. Extensive experiments on two real-world IoT applications are performed to evaluate InfoMAE's pairing efficiency to bridge pretrained unimodal models into a cohesive joint multimodal model. InfoMAE enhances downstream multimodal tasks by over 60% with significantly improved multimodal pairing efficiency. It also improves unimodal task accuracy by an average of 22%.
Problem

Research questions and friction points this paper is trying to address.

Scarcity of high-quality multimodal pairs for SSL pretraining
Achieving cross-modal alignment with limited data pairs
Maintaining strong multimodal and unimodal task performance under pair scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient cross-modal alignment with limited data pairs
Information theory-inspired distribution and instance alignment
Bridges pretrained unimodal into cohesive joint model
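The third innovation above, bridging frozen unimodal models into a joint model with few pairs, can be illustrated with a toy, self-contained simulation. Everything here is an assumption for illustration: the "encoders" are fixed random linear maps from a shared latent state, the alignment head is a least-squares linear projection (a stand-in for the paper's learned alignment module), and cross-modal retrieval accuracy is the evaluation metric.

```python
import numpy as np

# Hypothetical setup: two frozen unimodal "encoders" simulated as fixed random
# linear maps applied to a shared underlying sensor state.
rng = np.random.default_rng(1)
D_lat, D_a, D_b = 6, 10, 12
enc_a = rng.normal(size=(D_lat, D_a))
enc_b = rng.normal(size=(D_lat, D_b))

def embed(n):
    """Draw n shared latent states and return the two frozen modality embeddings."""
    z = rng.normal(size=(n, D_lat))
    return z @ enc_a, z @ enc_b

# Only a handful of paired samples are available for alignment.
xa_pair, xb_pair = embed(16)

# Fit a lightweight linear alignment head by least squares: project modality-B
# embeddings into modality-A space using just the 16 pairs.
W, *_ = np.linalg.lstsq(xb_pair, xa_pair, rcond=None)

# Evaluate cross-modal retrieval on unseen pairs: each projected B-embedding
# should be nearest (by cosine similarity) to its paired A-embedding.
xa_test, xb_test = embed(32)
proj = xb_test @ W
pn = proj / np.linalg.norm(proj, axis=1, keepdims=True)
an = xa_test / np.linalg.norm(xa_test, axis=1, keepdims=True)
acc = np.mean((pn @ an.T).argmax(axis=1) == np.arange(len(xa_test)))
```

In this noiseless toy, a few pairs suffice because the frozen embeddings already encode the shared state; the alignment head only has to learn the mapping between the two representation spaces, which mirrors the intuition behind pair-efficient bridging.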