Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the relative roles of pre-training and fine-tuning in knowledge acquisition for dense retrieval. Methodologically, we conduct systematic experiments across pooling strategies (CLS vs. mean), model architectures (BERT-based encoders vs. LLaMA-based decoders), and benchmark datasets (MSMARCO, Natural Questions). Contrary to the widely held "pre-training-dominance" hypothesis, we find that fine-tuning substantially enhances knowledge representation, especially in Contriever (mean pooling) and LLaMA-based retrievers, whereas in DPR (CLS pooling with a BERT encoder) pre-training remains the dominant contributor. Further analysis reveals that fine-tuning primarily modulates neuron activation in DPR but induces semantic-space reconstruction in the other architectures. We establish a reproducible, cross-architecture, cross-dataset experimental framework, with all code and results publicly released. Our core contribution is challenging the pre-training-centric paradigm by demonstrating that architectural choices and pooling strategies critically govern the knowledge-acquisition pathway in dense retrieval.

📝 Abstract
Dense retrievers utilize pre-trained backbone language models (e.g., BERT, LLaMA) that are fine-tuned via contrastive learning to encode text into dense representations that can then be compared via a shallow similarity operation, e.g., the inner product. Recent research has questioned the role of fine-tuning vs. that of pre-training within dense retrievers, specifically arguing that retrieval knowledge is primarily gained during pre-training, meaning knowledge not acquired during pre-training cannot subsequently be acquired via fine-tuning. We revisit this idea here, as the claim was only studied in the context of a BERT-based encoder using DPR as the representative dense retriever. We extend the previous analysis by testing other representation approaches (comparing the use of CLS tokens with that of mean pooling), backbone architectures (encoder-only BERT vs. decoder-only LLaMA), and additional datasets (MSMARCO in addition to Natural Questions). Our study confirms that in DPR tuning, pre-trained knowledge underpins retrieval performance, with fine-tuning primarily adjusting neuron activation rather than reorganizing knowledge. However, this pattern does not hold universally, such as in mean-pooled (Contriever) and decoder-based (LLaMA) models. We ensure full reproducibility and make our implementation publicly available at https://github.com/ielab/DenseRetriever-Knowledge-Acquisition.
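To make the two representation strategies contrasted in the abstract concrete, here is a minimal sketch of CLS pooling (DPR-style) vs. mean pooling (Contriever-style) followed by inner-product scoring. This uses plain NumPy with toy token vectors; the function names and values are illustrative and not taken from the paper's codebase.

```python
import numpy as np

def cls_pool(hidden_states):
    """CLS pooling (DPR-style): take the first token's vector as the text embedding."""
    return hidden_states[0]

def mean_pool(hidden_states, attention_mask):
    """Mean pooling (Contriever-style): average the vectors of non-padding tokens."""
    mask = attention_mask[:, None].astype(hidden_states.dtype)
    return (hidden_states * mask).sum(axis=0) / mask.sum()

# Toy "encoder output" for one query: 4 tokens x 3 dims, last token is padding.
h = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0],
              [9.0, 9.0, 9.0]])
mask = np.array([1, 1, 1, 0])

q_cls = cls_pool(h)           # embedding = first token: [1, 0, 0]
q_mean = mean_pool(h, mask)   # embedding = mean of 3 real tokens: [1/3, 2/3, 1]

# Shallow similarity: inner product between query and passage embeddings.
p = np.array([0.5, 0.5, 0.5])  # a toy passage embedding
score_cls = float(q_cls @ p)   # 0.5
score_mean = float(q_mean @ p) # 1.0
```

The point of the toy numbers is that the same encoder output yields different embeddings, and therefore different retrieval scores, depending solely on the pooling choice — exactly the axis of variation the paper tests.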
Problem

Research questions and friction points this paper is trying to address.

Examines pre-training vs fine-tuning in dense retrievers
Assesses retrieval knowledge acquisition in different architectures
Tests impact of representation approaches on performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes pre-trained BERT and LLaMA models
Compares CLS tokens with mean pooling
Tests encoder-only and decoder-only architectures