Generative Giants, Retrieval Weaklings: Why do Multimodal Large Language Models Fail at Multimodal Retrieval?

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Despite strong generative capabilities, multimodal large language models (MLLMs) suffer significant performance degradation in zero-shot cross-modal retrieval. This paper identifies the root cause as severe text bias in the joint representation space, where visual discriminative information is both sparse and further attenuated by modality alignment mechanisms. We apply sparse autoencoders (SAEs) to interpretably decompose MLLM intermediate representations, uncovering for the first time vision-agnostic interference features that dominate similarity computation. By integrating representation disentanglement analysis with attention visualization, we expose an intrinsic conflict between modality alignment objectives and retrieval discriminability. Empirical validation confirms semantic imbalance as the core failure mechanism in retrieval. Based on this insight, we propose a verifiable improvement pathway, offering both theoretical foundations and technical guidance for developing unified multimodal models that jointly excel at generation and retrieval.

📝 Abstract
Despite the remarkable success of multimodal large language models (MLLMs) in generative tasks, we observe that they exhibit a counterintuitive deficiency in the zero-shot multimodal retrieval task. In this work, we investigate the underlying mechanisms that hinder MLLMs from serving as effective retrievers. With the help of sparse autoencoders (SAEs), we decompose MLLM output representations into interpretable semantic concepts to probe their intrinsic behavior. Our analysis reveals that the representation space of MLLMs is overwhelmingly dominated by textual semantics; the visual information essential for multimodal retrieval constitutes only a small portion. This imbalance is compounded by the heavy focus of MLLMs on bridging image-text modalities, which facilitates generation but homogenizes embeddings and ultimately diminishes the discriminative power required for multimodal retrieval. We further discover that the specific feature components that contribute most to the similarity computations for MLLMs are in fact distractors that actively degrade retrieval performance. Overall, our work provides the first in-depth interpretability analysis of MLLM representations in the context of multimodal retrieval and offers possible directions for enhancing the multimodal retrieval capabilities of MLLMs.
Problem

Research questions and friction points this paper is trying to address.

Investigates why MLLMs underperform in zero-shot multimodal retrieval tasks
Analyzes how MLLM representations prioritize text over visual information for retrieval
Identifies feature components in MLLMs that degrade multimodal retrieval performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposing MLLM representations using sparse autoencoders
Revealing textual dominance and visual deficiency in MLLMs
Identifying similarity distractors that degrade retrieval performance
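The decomposition idea behind these contributions can be sketched in a few lines: an SAE rewrites an embedding as a sparse combination of decoder directions, so a dot-product similarity splits into per-feature contributions, and the largest contributors can then be inspected as candidate distractors. The sketch below is illustrative only: the ReLU encoder, random weights, and dimensions are assumptions standing in for the paper's trained SAE.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features = 64, 256  # hypothetical embedding and dictionary sizes

# Randomly initialized weights stand in for a trained SAE (assumption).
W_enc = rng.normal(scale=d_model ** -0.5, size=(d_model, n_features))
W_dec = rng.normal(scale=n_features ** -0.5, size=(n_features, d_model))
b_enc = np.zeros(n_features)
b_dec = np.zeros(d_model)

def sae_encode(x):
    """ReLU encoder: sparse, non-negative feature activations for embedding x."""
    return np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)

def sae_decode(a):
    """Reconstruct the embedding as a weighted sum of decoder directions."""
    return a @ W_dec + b_dec

def similarity_contributions(query, doc):
    """Per-feature contribution of the query's SAE features to <recon(query), doc>.

    Since recon(query) = sum_i a_i * d_i + b_dec, the dot product decomposes as
    sum_i a_i * <d_i, doc> + <b_dec, doc>; we return the per-feature terms.
    """
    a = sae_encode(query)
    return a * (W_dec @ doc)

query = rng.normal(size=d_model)  # stand-in for an MLLM query embedding
doc = rng.normal(size=d_model)    # stand-in for a candidate embedding

contrib = similarity_contributions(query, doc)
recon = sae_decode(sae_encode(query))

# The feature contributions sum exactly to the reconstructed similarity.
assert np.isclose(contrib.sum() + b_dec @ doc, recon @ doc)

# Features with the largest |contribution| are the candidates one would
# inspect for "distractor" behavior in the paper's analysis.
top_features = np.argsort(-np.abs(contrib))[:5]
```

In the paper's setting, the inspection step would rank real SAE features by how often they dominate similarity scores across a retrieval benchmark; here the ranking is over random directions and serves only to show the mechanics of the attribution.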