Supervised Fine-Tuning or Contrastive Learning? Towards Better Multimodal LLM Reranking

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the relative effectiveness and underlying mechanisms of supervised fine-tuning (SFT) versus contrastive learning (CL) for large language model (LLM)-based multimodal retrieval reranking. We propose the first unified analytical framework that decomposes both objective functions from two perspectives, parameter update direction and weight modulation, enabling rigorous theoretical and empirical comparison. Our analysis reveals that SFT aligns better with relevance discrimination in reranking because its gradients regulate per-example update weights more strongly. Through large-scale training, probing analyses, and cross-dataset evaluation on general multimodal retrieval tasks, we systematically benchmark SFT and CL. Experiments demonstrate that SFT consistently outperforms CL, achieving new state-of-the-art performance on the MRB benchmark. This work provides an interpretable theoretical foundation and practical guidance for objective function selection in LLM-based reranking.

📝 Abstract
In information retrieval, training reranking models mainly focuses on two types of objectives: metric learning (e.g., a contrastive loss that raises the predicted scores of relevant query-document pairs) and classification (binary prediction of relevance vs. irrelevance). For BERT-style encoders, various studies have shown that contrastive learning (CL) can be more effective than discriminative (classification) learning. However, for large language models (LLMs), classification via supervised fine-tuning (SFT), which predicts the "yes" (resp. "no") token for relevant (resp. irrelevant) pairs, appears more promising, as it aligns well with the generative nature of LLMs. This divergence raises a central question: which objective is intrinsically better suited to LLM-based reranking, and what mechanism underlies the difference? In this work, we conduct a comprehensive comparison and analysis of CL and SFT for reranking, taking universal multimodal retrieval (UMR) as the experimental playground. We first decompose the objectives into two components: weight, which controls the magnitude of parameter updates, and direction, which guides how the model is updated, and then present a unified framework for understanding their interactions. Through probing experiments, we find that SFT provides a substantially stronger weighting scheme than CL, whereas neither scoring direction emerges as a clear winner. Taken together, these results point to a consistent advantage of SFT over CL for LLM reranking. To further validate our findings, we conduct large-scale training with SFT and present new state-of-the-art rerankers on the MRB benchmark. We also provide ablations on SFT settings and expect our findings to benefit future research and applications in this area.
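As a rough illustration of the two objectives the abstract contrasts, the sketch below (plain Python, not the paper's code; the score and logit values are invented for illustration) computes an InfoNCE-style contrastive loss over a positive pair and its in-batch negatives, and the SFT-style cross-entropy over the "yes"/"no" token logits for a single pair:

```python
import math

def contrastive_loss(scores, pos_index=0):
    """InfoNCE-style contrastive loss: softmax cross-entropy that pushes
    the positive pair's score above the scores of in-batch negatives."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[pos_index] / sum(exps))

def sft_loss(yes_logit, no_logit, relevant):
    """SFT-as-classification loss: cross-entropy over the 'yes'/'no'
    token logits, treating reranking as binary label prediction."""
    m = max(yes_logit, no_logit)
    z_yes = math.exp(yes_logit - m)
    z_no = math.exp(no_logit - m)
    p_yes = z_yes / (z_yes + z_no)
    return -math.log(p_yes if relevant else 1.0 - p_yes)

# Hypothetical scores: positive pair first, then two negatives.
print(contrastive_loss([2.0, 0.5, -1.0]))
print(sft_loss(yes_logit=2.0, no_logit=0.5, relevant=True))
```

Note the structural difference: the contrastive loss couples each positive to the whole candidate list, while the SFT loss scores every query-document pair independently.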
Problem

Research questions and friction points this paper is trying to address.

Comparing contrastive learning versus supervised fine-tuning for LLM reranking
Analyzing weighting schemes and update directions in multimodal retrieval
Determining optimal training objectives for large language model rerankers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares contrastive learning and supervised fine-tuning for reranking
Proposes unified framework analyzing weight and direction components
Identifies SFT's stronger weighting scheme as key advantage
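The weight/direction decomposition in the bullets above can be made concrete through gradients: for both objectives, the gradient with respect to a pair's score factors into a scalar "weight" (how hard the example pushes) times a "direction" (which way the score moves). A minimal sketch, assuming simple softmax/sigmoid forms; function names and example logits are ours, not the paper's:

```python
import math

def cl_grad_weights(scores, pos_index=0):
    """Gradient of the contrastive loss w.r.t. each candidate's score:
    softmax probability minus the one-hot label. Its magnitude is the
    'weight' each pair contributes; note it depends on the whole list."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total - (1.0 if i == pos_index else 0.0)
            for i, e in enumerate(exps)]

def sft_grad_weight(yes_logit, no_logit, relevant):
    """Gradient of the yes/no cross-entropy w.r.t. the logit margin:
    sigmoid(margin) minus the binary label. Each pair is weighted on
    its own, independently of other candidates in the batch."""
    margin = yes_logit - no_logit
    p_yes = 1.0 / (1.0 + math.exp(-margin))
    return p_yes - (1.0 if relevant else 0.0)

print(cl_grad_weights([2.0, 0.5, -1.0]))   # weights sum to zero
print(sft_grad_weight(2.0, 0.5, True))     # per-pair weight in (-1, 0)
```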
Ziqi Dai
Harbin Institute of Technology, Shenzhen
Xin Zhang
Harbin Institute of Technology, Shenzhen
Mingxin Li
Japan Society for the Promotion of Science (JSPS) Research Fellow, The University of Tokyo
Digital twin, renewable energy, operation and maintenance
Yanzhao Zhang
Dingkun Long
Pengjun Xie
Alibaba Group
NLP/IR/ML
Meishan Zhang
Associate Professor, Harbin Institute of Technology at Shenzhen
Natural Language Processing, Computational Linguistics, Syntax Parsing, Sentiment Analysis, Machine …
Wenjie Li
The Hong Kong Polytechnic University
Min Zhang
Harbin Institute of Technology, Shenzhen