🤖 AI Summary
Existing vision-language models struggle to uniformly model diverse query–target modality combinations (e.g., text→image-text, image-text→text), leading to insufficient cross-modal alignment and degraded inference performance. To address this, we propose a unified modality-completion approach: (1) a generative modality-completion module that explicitly maps textual inputs to visual features, constructing modality-complete representations for both queries and targets; (2) identification and quantification of the inherent bias arising from skewed modality-combination distributions in training data, which the completion paradigm systematically mitigates; and (3) dual-path embedding alignment with cross-modal consistency constraints that improves the robustness of contrastive learning. The approach achieves state-of-the-art performance on multi-modal retrieval and visual grounding across all modality combinations, with stable results and markedly reduced performance fluctuations under data skew.
📝 Abstract
Current research has explored vision-language models for multi-modal embedding tasks, such as information retrieval, visual grounding, and classification. However, real-world scenarios often involve diverse modality combinations between queries and targets, such as image-text→text, image-text→image-text, and text→image-text. These diverse combinations pose significant challenges for existing models, which struggle to align all modality combinations within a unified embedding space during training, degrading performance at inference. To address this limitation, we propose UniMoCo, a novel vision-language model architecture designed for multi-modal embedding tasks. UniMoCo introduces a modality-completion module that generates visual features from textual inputs, ensuring modality completeness for both queries and targets. Additionally, we develop a specialized training strategy to align embeddings from both original and modality-completed inputs, ensuring consistency within the embedding space. This enables the model to robustly handle a wide range of modality combinations across embedding tasks. Experiments show that UniMoCo outperforms previous methods while demonstrating consistent robustness across diverse settings. More importantly, we identify and quantify the inherent bias in conventional approaches caused by the imbalance of modality combinations in training data, which can be mitigated through our modality-completion paradigm. The code is available at https://github.com/HobbitQia/UniMoCo.
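To make the dual-path training idea concrete, here is a minimal NumPy sketch of one plausible form of the objective: an in-batch contrastive (InfoNCE-style) loss applied to both the original and the modality-completed embeddings, plus a cosine-based consistency term pulling the two paths together. All names, the toy random "embeddings", and the loss weighting are illustrative assumptions, not the authors' actual implementation (which lives in the linked repository).

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere, as is standard for contrastive learning.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(query_emb, target_emb, tau=0.07):
    """In-batch contrastive loss: the i-th query should match the i-th target."""
    logits = query_emb @ target_emb.T / tau              # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # cross-entropy on the diagonal

def consistency_loss(emb_a, emb_b):
    """Encourage original and modality-completed embeddings to agree (1 - cosine)."""
    return np.mean(1.0 - np.sum(emb_a * emb_b, axis=1))

# Toy stand-ins for encoder outputs: B query/target pairs, d-dim,
# for the original inputs and their modality-completed counterparts.
B, d = 8, 32
q_orig = l2_normalize(rng.normal(size=(B, d)))
q_comp = l2_normalize(q_orig + 0.1 * rng.normal(size=(B, d)))
t_orig = l2_normalize(rng.normal(size=(B, d)))
t_comp = l2_normalize(t_orig + 0.1 * rng.normal(size=(B, d)))

# Dual-path objective: contrastive alignment on both paths plus a
# cross-path consistency constraint (0.5 is an assumed weight).
loss = (info_nce(q_orig, t_orig) + info_nce(q_comp, t_comp)
        + 0.5 * (consistency_loss(q_orig, q_comp)
                 + consistency_loss(t_orig, t_comp)))
print(round(float(loss), 4))
```

The consistency terms are what tie the two paths into one embedding space: without them, the original-input and completed-input encoders could each solve their own contrastive task in incompatible regions of the space.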