Better Together: Leveraging Unpaired Multimodal Data for Stronger Unimodal Models

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional multimodal learning relies on paired data to construct unified representations, yet leveraging **unpaired auxiliary-modality data** to enhance representation learning in a target modality remains largely unexplored. This paper proposes **UML (Unpaired Multimodal Learner)**, a modality-agnostic training paradigm that implicitly captures structural correlations across modalities via parameter sharing and cross-modal alternating training, enabling knowledge transfer without explicit alignment. Under a linear generative assumption, the theoretical analysis shows that auxiliary modalities provide richer supervision about the underlying data-generating process. Methodologically, UML integrates contrastive and self-supervised learning strategies, which supports strong generalizability. Empirically, incorporating unpaired auxiliary data, such as text, audio, or images, yields consistent and significant performance gains on diverse downstream tasks, including image classification and audio recognition. These results support UML's effectiveness, modality-agnostic design, and broad applicability across single-modality learning scenarios.
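
As a rough illustration of what "a linear generative assumption" can look like here (an assumption made for this summary, not necessarily the paper's exact setup), each modality can be modeled as a linear projection of one shared latent variable:

```latex
% Hedged sketch of a shared-latent linear data-generating model: the target
% modality x and the auxiliary modality y are both linear views of the same z.
\[
  z \sim \mathcal{N}(0, I), \qquad
  x = A_x z + \varepsilon_x, \qquad
  y = A_y z + \varepsilon_y .
\]
% Because x and y are generated from the same latent z, even unpaired samples
% of y carry information about the shared latent structure that a good
% representation of x should recover.
```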

📝 Abstract
Traditional multimodal learners find unified representations for tasks like visual question answering, but rely heavily on paired datasets. However, an overlooked yet potentially powerful question is: can one leverage auxiliary unpaired multimodal data to directly enhance representation learning in a target modality? We introduce UML: Unpaired Multimodal Learner, a modality-agnostic training paradigm in which a single model alternately processes inputs from different modalities while sharing parameters across them. This design exploits the assumption that different modalities are projections of a shared underlying reality, allowing the model to benefit from cross-modal structure without requiring explicit pairs. Theoretically, under linear data-generating assumptions, we show that unpaired auxiliary data can yield representations strictly more informative about the data-generating process than unimodal training. Empirically, we show that using unpaired data from auxiliary modalities -- such as text, audio, or images -- consistently improves downstream performance across diverse unimodal targets such as image and audio. Our project page: https://unpaired-multimodal.github.io/
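
To make the alternating, parameter-sharing recipe concrete, below is a minimal PyTorch-style sketch under assumed names (SharedBackbone, per-modality stems, a placeholder self-supervised loss); it illustrates the idea of training one shared model on unpaired batches from two modalities, and is not the authors' released code.

```python
# Minimal sketch (hypothetical names, not the authors' code): one shared backbone,
# lightweight modality-specific stems, and alternating steps over *unpaired*
# batches from the target modality (e.g., images) and an auxiliary one (e.g., text).
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

# Modality-specific stems map raw features of each modality into the shared space.
image_stem = nn.Linear(2048, 256)    # e.g., pooled image features
text_stem = nn.Linear(768, 256)      # e.g., pooled text features
backbone = SharedBackbone(dim=256)   # parameters shared across modalities

params = list(image_stem.parameters()) + list(text_stem.parameters()) + list(backbone.parameters())
opt = torch.optim.AdamW(params, lr=1e-4)

def ssl_loss(z):
    # Placeholder self-supervised objective; the actual method would use a
    # contrastive or similar self-supervised loss here.
    return (z - z.mean(dim=0)).pow(2).mean()

def train_step(batch, stem):
    opt.zero_grad()
    z = backbone(stem(batch))
    loss = ssl_loss(z)
    loss.backward()
    opt.step()
    return loss.item()

# Dummy unpaired data standing in for real image / text feature loaders.
image_loader = [torch.randn(32, 2048) for _ in range(10)]
text_loader = [torch.randn(32, 768) for _ in range(10)]

# Alternate over unpaired batches: the auxiliary modality shapes the shared
# parameters without any image-text pairing being required.
for image_batch, text_batch in zip(image_loader, text_loader):
    train_step(image_batch, image_stem)
    train_step(text_batch, text_stem)
```
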
Problem

Research questions and friction points this paper is trying to address.

Leveraging unpaired multimodal data to enhance unimodal representation learning
Developing modality-agnostic training with shared parameters across modalities
Improving downstream performance using auxiliary unpaired text, audio, or images
Innovation

Methods, ideas, or system contributions that make the work stand out.

UML trains a single model with parameters shared across modalities
Alternately processes inputs from different modalities
Exploits cross-modal structure without paired data (see the usage sketch below)
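
To show how the resulting "stronger unimodal model" would be consumed downstream, here is a small, hypothetical probing setup: the target-modality encoder trained as sketched above is frozen and a linear classifier is fit on top. The feature sizes and the linear-probe choice are assumptions for illustration, not the paper's evaluation protocol.

```python
# Hypothetical downstream usage (illustrative names, not the authors' protocol):
# freeze the UML-pretrained image path and fit a linear probe for classification.
import torch
import torch.nn as nn

# Stand-in for the pretrained image stem + shared backbone.
encoder = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 256))
for p in encoder.parameters():
    p.requires_grad = False  # keep the pretrained representation fixed

probe = nn.Linear(256, 1000)                        # e.g., 1000-way image classification
opt = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

features = torch.randn(32, 2048)                    # dummy pooled image features
labels = torch.randint(0, 1000, (32,))              # dummy class labels

with torch.no_grad():
    z = encoder(features)                           # frozen representation
logits = probe(z)
loss = criterion(logits, labels)
loss.backward()
opt.step()
```
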
🔎 Similar Papers