🤖 AI Summary
This study investigates the relative effectiveness and underlying mechanisms of supervised fine-tuning (SFT) versus contrastive learning (CL) for large language model (LLM)-based multimodal retrieval reranking. We propose a unified analytical framework that decomposes both objective functions from dual perspectives—parameter update direction and weight modulation—enabling rigorous theoretical and empirical comparison. Our analysis reveals that SFT's advantage in reranking stems primarily from its stronger gradient weighting scheme, while neither objective's update direction is clearly superior. Through large-scale training, probing analyses, and cross-dataset evaluation on general multimodal retrieval tasks, we systematically benchmark SFT and CL. Experiments demonstrate that SFT consistently outperforms CL, achieving new state-of-the-art performance on the MRB benchmark. This work provides an interpretable theoretical foundation and practical guidance for objective function selection in LLM-based reranking.
📝 Abstract
In information retrieval, training reranking models mainly focuses on two types of objectives: metric learning (e.g., a contrastive loss that increases the predicted scores on relevant query-document pairs) and classification (binary prediction of relevance vs. irrelevance). For BERT-style encoders, various studies have shown that contrastive learning (CL) can be more effective than discriminative (classification) learning. However, for large language models (LLMs), classification via supervised fine-tuning (SFT), which predicts a "yes" (resp. "no") token for relevant (resp. irrelevant) pairs, appears more promising as it aligns well with the generative nature of LLMs. This divergence raises a central question: which objective is intrinsically better suited to LLM-based reranking, and what mechanism underlies the difference? In this work, we conduct a comprehensive comparison and analysis between CL and SFT for reranking, taking universal multimodal retrieval (UMR) as the experimental playground. We first decompose the objectives into two components: direction, which guides the model updates, and weight, which controls the magnitude of those updates, then present a unified framework for understanding their interactions. Through probing experiments, we find that SFT provides a substantially stronger weighting scheme than CL, whereas the preferred scoring direction shows no clear winner. Taken together, these results point to a consistent advantage of SFT over CL for LLM reranking. To further validate our findings, we conduct large-scale training with SFT and present new state-of-the-art rerankers on the MRB benchmark. We also provide ablations on SFT settings and expect our findings to benefit future research and applications in this area.
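To make the two objectives concrete, here is a minimal sketch of each loss in pure Python. It assumes the model emits a scalar relevance score per query-document pair (for CL) or raw logits for the "yes"/"no" tokens (for SFT); the function names, the temperature value, and the single-positive InfoNCE form are illustrative choices, not the paper's exact training setup.

```python
import math


def contrastive_loss(pos_score, neg_scores, temperature=0.05):
    """InfoNCE-style contrastive loss: push the relevant pair's
    score above the scores of irrelevant (negative) pairs."""
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    # Numerically stable log-sum-exp over all candidates.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    # Negative log-probability of the positive pair.
    return log_z - logits[0]


def sft_loss(yes_logit, no_logit, is_relevant):
    """SFT as classification: next-token cross-entropy restricted
    to the "yes" / "no" vocabulary entries."""
    m = max(yes_logit, no_logit)
    log_z = m + math.log(math.exp(yes_logit - m) + math.exp(no_logit - m))
    target = yes_logit if is_relevant else no_logit
    return log_z - target
```

Both losses decrease as the model separates relevant from irrelevant pairs; the decomposition in the paper asks how each one scales (weight) and orients (direction) the resulting gradient updates.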