🤖 AI Summary
This paper systematically investigates the effectiveness and scalability of large language model (LLM)-based data augmentation for retrieval tasks, particularly under out-of-distribution (OOD) conditions. Using a dual-encoder architecture, the authors run over 100 experimental configurations to compare generative augmentation strategies at multiple scales, analyzing the impact of LLM size, augmentation volume, and retriever pre-training depth. Key findings are: (1) lightweight LLMs achieve augmentation performance on par with much larger models, substantially improving cost-efficiency; (2) augmentation gains exhibit diminishing marginal returns, with moderate augmentation yielding the best performance; (3) augmentation benefits are most pronounced for poorly pre-trained retrievers, while strongly pre-trained models show limited improvement; and (4) diverse augmentation enhances OOD generalization. All code and experimental resources are publicly released.
📝 Abstract
Compact dual-encoder models are widely used for retrieval owing to their efficiency and scalability. However, such models often underperform compared to their Large Language Model (LLM)-based retrieval counterparts, likely due to their limited world knowledge. While LLM-based data augmentation has been proposed as a strategy to bridge this performance gap, its effectiveness and scalability on real-world retrieval problems remain insufficiently understood. Existing research does not systematically explore key factors such as the optimal augmentation scale, the necessity of using large augmentation models, and whether diverse augmentations improve generalization, particularly in out-of-distribution (OOD) settings. This work presents a comprehensive study of the effectiveness of LLM augmentation for retrieval, comprising over 100 distinct experimental settings of retrieval models, augmentation models, and augmentation strategies. We find that, while augmentation enhances retrieval performance, its benefits diminish beyond a certain augmentation scale, even with diverse augmentation strategies. Surprisingly, we observe that augmentation with smaller LLMs can achieve performance competitive with that of larger augmentation models. Moreover, we examine how augmentation effectiveness varies with retrieval model pre-training, revealing that augmentation provides the most benefit to models that are not well pre-trained. Our insights pave the way for more judicious and efficient augmentation strategies, thus enabling informed decisions and maximizing retrieval performance while being more cost-effective. Code and augmented datasets accompanying this work are publicly available at https://aka.ms/DAGR.
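To make the dual-encoder setup concrete, here is a minimal sketch of how such a retriever scores documents: queries and documents are embedded independently and ranked by cosine similarity. The bag-of-words encoder below is a hypothetical stand-in for the trained neural encoder (the real models, training data, and LLM-generated augmentations are described in the paper, not here); in the augmentation setting studied, an LLM would generate synthetic query–document training pairs for such an encoder.

```python
import math

def build_vocab(texts):
    # Assign each unique token an index; stands in for a learned vocabulary.
    vocab = {}
    for t in texts:
        for tok in t.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text, vocab):
    # Toy bag-of-words unit vector; a trained dual encoder would use a
    # transformer here, but the scoring logic below is the same.
    vec = [0.0] * len(vocab)
    for tok in text.lower().split():
        if tok in vocab:  # out-of-vocabulary tokens are dropped
            vec[vocab[tok]] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, corpus, vocab, k=2):
    # Dual-encoder retrieval: embed query and documents independently,
    # then rank by dot product of unit vectors (cosine similarity).
    q = encode(query, vocab)
    scored = sorted(
        ((sum(a * b for a, b in zip(q, encode(d, vocab))), d) for d in corpus),
        reverse=True,
    )
    return [d for _, d in scored[:k]]

corpus = [
    "dense retrieval with dual encoders",
    "LLM-generated synthetic queries augment training data",
    "cooking recipes for pasta",
]
vocab = build_vocab(corpus)
print(retrieve("dual encoder retrieval", corpus, vocab, k=1))
# → ['dense retrieval with dual encoders']
```

Because the two towers embed text independently, document vectors can be precomputed and indexed, which is the efficiency property the abstract attributes to compact dual-encoder models.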