🤖 AI Summary
Cross-modal models like CLIP suffer from substantial modality gaps, and existing MLLM-based retrievers rely solely on coarse-grained alignment, limiting fine-grained semantic matching. Method: We propose MAPLE—the first framework to exploit the fine-grained alignment priors of an MLLM to automatically construct image–text preference data—and introduce the Relative Preference Alignment (RPA) loss, which adapts the Direct Preference Optimization (DPO) paradigm to cross-modal embedding learning. Crucially, MAPLE avoids explicit reinforcement-learning fine-tuning, optimizing the embedding model through preference supervision alone. Contribution/Results: The method keeps the model lightweight while significantly improving fine-grained retrieval performance—especially on tasks requiring subtle semantic discrimination—outperforming unified-architecture MLLM retrievers. MAPLE establishes a novel paradigm for cross-modal representation alignment grounded in preference-based fine-grained supervision.
📝 Abstract
Despite Contrastive Language-Image Pretraining (CLIP)'s remarkable capability to retrieve content across modalities, a substantial modality gap persists in its feature space. Intriguingly, we discover that off-the-shelf Multimodal Large Language Models (MLLMs) demonstrate powerful inherent modality alignment properties. While recent MLLM-based retrievers with unified architectures partially mitigate this gap, their reliance on coarse modality alignment mechanisms fundamentally limits their potential. In this work, we introduce MAPLE (Modality-Aligned Preference Learning for Embeddings), a novel framework that leverages the fine-grained alignment priors inherent in MLLMs to guide cross-modal representation learning. MAPLE formulates the learning process as reinforcement learning with two key components: (1) automatic preference data construction using an off-the-shelf MLLM, and (2) a new Relative Preference Alignment (RPA) loss, which adapts Direct Preference Optimization (DPO) to the embedding learning setting. Experimental results show that our preference-guided alignment achieves substantial gains in fine-grained cross-modal retrieval, underscoring its effectiveness in handling nuanced semantic distinctions.
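To make the DPO-to-embeddings adaptation concrete, here is a minimal sketch of what a relative preference loss over similarity scores could look like. This is an illustrative reconstruction, not the paper's exact RPA formulation: the function name `rpa_loss`, the cosine-similarity scoring, and the temperature `beta` are all assumptions; the only grounded idea is the DPO-style Bradley–Terry objective, which rewards the preferred (fine-grained matching) caption's similarity relative to the rejected one instead of using absolute contrastive targets.

```python
import torch
import torch.nn.functional as F

def rpa_loss(img_emb, pos_txt_emb, neg_txt_emb, beta=1.0):
    """Illustrative DPO-style relative preference loss for embeddings.

    img_emb:     (B, D) image embeddings
    pos_txt_emb: (B, D) embeddings of the MLLM-preferred captions
    neg_txt_emb: (B, D) embeddings of the MLLM-rejected captions
    beta:        temperature scaling the preference margin (assumed hyperparameter)
    """
    img = F.normalize(img_emb, dim=-1)
    pos = F.normalize(pos_txt_emb, dim=-1)
    neg = F.normalize(neg_txt_emb, dim=-1)
    s_pos = (img * pos).sum(-1)  # cosine similarity with preferred caption
    s_neg = (img * neg).sum(-1)  # cosine similarity with rejected caption
    # Bradley-Terry / DPO form: maximize log sigmoid of the scaled margin,
    # so the loss depends only on the *relative* ordering of the two captions.
    return -F.logsigmoid(beta * (s_pos - s_neg)).mean()
```

In this sketch the loss is small when the preferred caption already outscores the rejected one, and large when the ordering is inverted, so gradients concentrate on pairs where the embedding model disagrees with the MLLM's fine-grained preference.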