MCA: Modality Composition Awareness for Robust Composed Multimodal Retrieval

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unified encoders in multimodal retrieval suffer from poor out-of-distribution (OOD) robustness due to modality shortcuts—overreliance on spurious unimodal cues rather than genuine cross-modal semantics. Method: We propose a modality-composition-aware framework that explicitly models the hierarchical relationship between multimodal representations and their unimodal constituents to suppress shortcut learning. Specifically, we introduce a preference loss to encourage reliance on cross-modal synergy over single-modality signals, and design a composition regularization objective—incorporating prototype alignment—to enforce semantic understanding of modality composition structures. Integrated into a multimodal large language model as the unified encoder, our approach combines contrastive learning with these mechanisms. Contribution/Results: Our method significantly improves OOD robustness across multiple retrieval benchmarks. It is the first to systematically incorporate explicit modality composition modeling into unified encoder training, thereby enhancing generalization capability without architectural modification.

📝 Abstract
Multimodal retrieval, which seeks to retrieve relevant content across modalities such as text and images, supports applications from AI search to content production. While separate-encoder approaches like CLIP successfully align modality-specific embeddings with contrastive learning, recent multimodal large language models (MLLMs) enable a unified encoder that directly processes composed inputs. Despite this flexibility, we identify that unified encoders trained with conventional contrastive learning are prone to learning modality shortcuts, leading to poor robustness under distribution shifts. We propose a modality composition awareness framework to mitigate this issue. Concretely, a preference loss enforces that multimodal embeddings outperform their unimodal counterparts, while a composition regularization objective aligns multimodal embeddings with prototypes composed from their unimodal parts. These objectives explicitly model the structural relationships between a composed representation and its unimodal counterparts. Experiments on various benchmarks show gains in out-of-distribution retrieval, highlighting modality composition awareness as an effective principle for robust composed multimodal retrieval when MLLMs serve as the unified encoder.
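The preference loss described above can be sketched as a margin objective: the composed multimodal embedding's similarity to the target should beat every unimodal similarity by some margin. This is a minimal illustration with made-up embeddings; the paper's exact formulation, margin value, and similarity measure may differ.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def preference_loss(sim_multi, sims_uni, margin=0.1):
    """Hinge-style preference: penalize any unimodal similarity that
    comes within `margin` of (or exceeds) the composed similarity."""
    sims_uni = np.asarray(sims_uni, dtype=float)
    return float(np.maximum(0.0, margin - (sim_multi - sims_uni)).mean())

# Toy example: a composed query embedding close to the target, versus
# unimodal (text-only / image-only) shortcuts that are less aligned.
rng = np.random.default_rng(0)
target = rng.normal(size=8)
composed = target + 0.1 * rng.normal(size=8)
text_only = rng.normal(size=8)
image_only = rng.normal(size=8)

loss = preference_loss(
    cosine(composed, target),
    [cosine(text_only, target), cosine(image_only, target)],
)
```

When the composed embedding already dominates every unimodal one by the margin, the loss is zero; otherwise the gradient pushes reliance toward cross-modal synergy rather than single-modality cues.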
Problem

Research questions and friction points this paper is trying to address.

Addresses modality shortcut learning in unified multimodal encoders
Enhances robustness under distribution shifts for retrieval
Models structural relationships between composed and unimodal representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality composition awareness framework for robust retrieval
Preference loss enforces multimodal over unimodal embeddings
Composition regularization aligns multimodal with unimodal prototypes
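The composition-regularization idea in the bullets above can be sketched as pulling the unified encoder's multimodal embedding toward a prototype built from its unimodal parts. The normalized-mean prototype below is an illustrative stand-in; the paper does not specify this exact composition function here.

```python
import numpy as np

def l2_normalize(v):
    """Project a vector onto the unit sphere."""
    return v / np.linalg.norm(v)

def composition_prototype(text_emb, image_emb):
    """Prototype composed from unimodal parts: a normalized mean of the
    unit-normalized text and image embeddings (an assumed stand-in for
    the paper's composition)."""
    return l2_normalize(l2_normalize(text_emb) + l2_normalize(image_emb))

def composition_reg(multi_emb, text_emb, image_emb):
    """Cosine distance between the multimodal embedding and the prototype
    composed from its unimodal constituents; 0 means perfect alignment."""
    proto = composition_prototype(text_emb, image_emb)
    return float(1.0 - l2_normalize(multi_emb) @ proto)
```

A multimodal embedding that already lies on the composed prototype incurs zero penalty, so the regularizer only acts when the unified encoder drifts away from the structure implied by its unimodal inputs.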