🤖 AI Summary
This study investigates whether dense retrieval models develop a "source bias" toward content generated by large language models (LLMs) during training. Through controlled experiments on parallel human/LLM datasets such as SciFact and NQ320K—spanning unsupervised pretraining, MS MARCO fine-tuning, and LLM-corpus fine-tuning—and complemented by perplexity probing analyses, the work systematically traces the emergence and evolution of this bias across training stages. The findings reveal that unsupervised retrievers exhibit no uniform preference for LLM-generated text; supervised fine-tuning on MS MARCO consistently shifts rankings toward it; in-domain fine-tuning produces dataset-specific, inconsistent shifts; and fine-tuning on LLM-generated corpora induces a pronounced pro-LLM bias. Moreover, a perplexity probe shows agreement with relevance near chance, challenging the prevailing assumption that low perplexity drives source bias. These results indicate that source bias arises primarily from specific fine-tuning phases rather than inherent model properties.
📝 Abstract
Dense retrieval is a promising approach for acquiring relevant context or world knowledge in open-domain natural language processing tasks and is now widely used in information retrieval applications. However, recent reports claim that dense retrievers broadly prefer text generated by large language models (LLMs) over human-written text. This bias is called "source bias", and it has been hypothesized that lower perplexity contributes to this effect. In this study, we revisit this claim by conducting a controlled evaluation to trace the emergence of such preferences across training stages and data sources. Using parallel human- and LLM-generated counterparts of the SciFact and Natural Questions (NQ320K) datasets, we compare unsupervised checkpoints with models fine-tuned on in-domain human text, in-domain LLM-generated text, and MS MARCO. Our results show the following: 1) Unsupervised retrievers do not exhibit a uniform pro-LLM preference; the direction and magnitude depend on the dataset. 2) Across the settings tested, supervised fine-tuning on MS MARCO consistently shifts rankings toward LLM-generated text. 3) In-domain fine-tuning produces dataset-specific and inconsistent shifts in preference. 4) Fine-tuning on LLM-generated corpora induces a pronounced pro-LLM bias. Finally, a retriever-centric perplexity probe, in which a language modeling head is reattached to the fine-tuned dense retriever encoder, shows agreement with relevance near chance, weakening the explanatory power of perplexity. Our study demonstrates that source bias is a training-induced phenomenon rather than an inherent property of dense retrievers.