AI Summary
This work addresses the limited generalizability of existing multimodal learning methods, which are typically constrained to fixed modality combinations within a single dataset and struggle with unseen modalities or cross-dataset scenarios. To overcome this, the authors propose a shared-modality bridging mechanism that aligns common modalities across multiple distributionally mismatched datasets and generates pseudo-embeddings for missing modalities. This enables arbitrary cross-dataset modality pairing and facilitates universal joint representation learning. The approach eliminates the reliance on predefined modality pairings and single-data-source assumptions inherent in conventional methods, thereby supporting effective generalization under low-data regimes, across multiple datasets, and with flexible modality configurations. Extensive experiments demonstrate significant performance gains over state-of-the-art methods across various benchmarks and low-resource settings, validating the framework's efficacy in modality generalization and cross-dataset collaborative learning.
Abstract
Multi-modal learning combines various modalities to provide a comprehensive understanding of real-world problems. A common strategy is to directly bind different modalities together in a specific joint embedding space. However, existing methods are restricted to the modalities present in a given dataset, and are therefore biased when generalizing to unseen modalities in downstream tasks. This inflexibility, combined with the cost of acquiring multi-modal datasets, seriously hinders the viability of previous methods. In this paper, we introduce BrokenBind, which binds modalities that appear in different datasets. To achieve this, BrokenBind simultaneously leverages multiple datasets that contain the modalities of interest and one shared modality. Although the datasets do not correspond to each other due to distribution mismatch, we can capture their relationship through the shared modality to generate pseudo-embeddings that fill in the missing modalities of interest, enabling flexible and generalized multi-modal learning. Under our framework, any two modalities can be bound together, free from dataset limitations, achieving universal modality exploration. Furthermore, to probe the capability of our method, we study intensified scenarios in which more than two datasets are needed for modality binding, and we show the effectiveness of BrokenBind in low-data regimes. Through extensive evaluation, we justify the superiority of BrokenBind over well-known multi-modal baselines.
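The core bridging idea described above can be sketched in a few lines. This is a minimal illustrative example, not the paper's actual method: it assumes dataset A provides (shared, X) embedding pairs, dataset B provides (shared, Y) embedding pairs, and a pseudo Y-embedding for each A sample is synthesized as a similarity-weighted average of B's Y-embeddings, with similarity measured through the shared modality. All names, dimensions, and the softmax-weighting scheme are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    """L2-normalize embeddings along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical setup: embeddings of dimension d, n_a samples in dataset A,
# n_b samples in dataset B. In practice these would come from trained encoders.
d, n_a, n_b = 8, 5, 7
shared_a = normalize(rng.normal(size=(n_a, d)))  # shared-modality embeddings in A
shared_b = normalize(rng.normal(size=(n_b, d)))  # shared-modality embeddings in B
y_b = normalize(rng.normal(size=(n_b, d)))       # Y-modality embeddings in B

def pseudo_y_for_a(shared_a, shared_b, y_b, temperature=0.1):
    """For each A sample, build a pseudo Y-embedding by bridging through
    the shared modality: softmax over cosine similarities to B's shared
    embeddings, then a weighted average of B's Y-embeddings."""
    sims = shared_a @ shared_b.T / temperature           # (n_a, n_b)
    weights = np.exp(sims - sims.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return normalize(weights @ y_b)                      # (n_a, d)

pseudo_y = pseudo_y_for_a(shared_a, shared_b, y_b)
print(pseudo_y.shape)  # (5, 8)
```

Once pseudo Y-embeddings exist for every A sample, X and Y can be paired and bound in a joint space even though no single dataset ever contained both, which is the cross-dataset pairing the abstract describes.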