BrokenBind: Universal Modality Exploration beyond Dataset Boundaries

πŸ“… 2026-02-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limited generalizability of existing multimodal learning methods, which are typically constrained to fixed modality combinations within a single dataset and struggle with unseen modalities or cross-dataset scenarios. To overcome this, the authors propose a shared-modality bridging mechanism that aligns common modalities across multiple distributionally mismatched datasets and generates pseudo-embeddings for missing modalities. This enables arbitrary cross-dataset modality pairing and facilitates universal joint representation learning. The approach eliminates the reliance on predefined modality pairings and single-data-source assumptions inherent in conventional methods, thereby supporting effective generalization under low-data regimes, across multiple datasets, and with flexible modality configurations. Extensive experiments demonstrate significant performance gains over state-of-the-art methods across various benchmarks and low-resource settings, validating the framework’s efficacy in modality generalization and cross-dataset collaborative learning.
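The summary above describes bridging distributionally mismatched datasets through a modality they share, then synthesizing pseudo-embeddings for the modality each dataset lacks. The paper's actual mechanism is not detailed here, so the following is only an illustrative sketch under assumed design choices: a retrieval-style bridge where each sample's shared-modality embedding soft-attends (via cosine similarity and a softmax with an assumed temperature of 0.1) over the other dataset's shared-modality embeddings, and the resulting weights mix that dataset's target-modality embeddings into a pseudo-embedding. The function name `pseudo_embeddings` and all shapes are hypothetical.

```python
import numpy as np

def pseudo_embeddings(shared_a, shared_b, target_b, temperature=0.1):
    """Synthesize pseudo-embeddings of a modality missing from dataset A.

    shared_a: (Na, d) shared-modality embeddings from dataset A
    shared_b: (Nb, d) shared-modality embeddings from dataset B
    target_b: (Nb, k) target-modality embeddings from dataset B
    returns:  (Na, k) pseudo target-modality embeddings for dataset A
    """
    # Cosine similarity between shared-modality embeddings across datasets.
    a = shared_a / np.linalg.norm(shared_a, axis=1, keepdims=True)
    b = shared_b / np.linalg.norm(shared_b, axis=1, keepdims=True)
    sim = a @ b.T                              # (Na, Nb)
    # Softmax over dataset-B samples; lower temperature sharpens retrieval.
    w = np.exp(sim / temperature)
    w /= w.sum(axis=1, keepdims=True)
    # Pseudo-embedding: similarity-weighted mixture of B's target embeddings.
    return w @ target_b
```

With a low temperature this behaves like nearest-neighbor retrieval through the shared modality; a higher temperature yields a smoother mixture, which may matter when the cross-dataset distribution mismatch is large.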

πŸ“ Abstract
Multi-modal learning combines multiple modalities to provide a comprehensive understanding of real-world problems. A common strategy is to bind different modalities directly in a shared joint embedding space. However, existing methods are restricted to the modalities present in a given dataset, so they are biased when generalizing to unseen modalities in downstream tasks. As a result of this inflexibility, the viability of previous methods is seriously hindered by the cost of acquiring multi-modal datasets. In this paper, we introduce BrokenBind, which binds modalities drawn from different datasets. To achieve this, BrokenBind simultaneously leverages multiple datasets that contain the modalities of interest along with one shared modality. Although the datasets do not correspond to each other due to distribution mismatch, their relationship can be captured to generate pseudo embeddings that fill in the missing modalities of interest, enabling flexible and generalized multi-modal learning. Under our framework, any two modalities can be bound together, free from dataset limitations, to achieve universal modality exploration. To further reveal the capability of our method, we study intensified scenarios in which more than two datasets are needed for modality binding and show the effectiveness of BrokenBind in low-data regimes. Through extensive evaluation, we demonstrate the superiority of BrokenBind over well-known multi-modal baselines.
Problem

Research questions and friction points this paper is trying to address.

multi-modal learning
modality generalization
dataset limitation
cross-dataset binding
missing modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

universal modality exploration
cross-dataset binding
pseudo embeddings
multi-modal learning
modality generalization
πŸ”Ž Similar Papers
No similar papers found.