Disentangling Locality and Entropy in Ranking Distillation

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sampling strategies and teacher annotation policies used to train ranking models have become increasingly costly, yet the mechanisms behind their effectiveness have not been systematically disentangled. Method: Through broad ablation studies, analysis of model geometry, and evaluation along both input and target dimensions, we theoretically derive and empirically validate that sampling locality and teacher-model entropy exert orthogonal influences: the former governs model geometry, while the latter controls optimization bias, challenging the assumption that hard negative mining is universally beneficial. Results: Across the MSMARCO and BEIR benchmarks, under diverse state-of-the-art architectures, we demonstrate that simplified sampling combined with low-entropy teacher distillation matches the performance of complex ensemble methods while substantially reducing training overhead, improving reproducibility, and enhancing computational efficiency.

📝 Abstract
The training process of ranking models involves two key data selection decisions: a sampling strategy and a labeling strategy. Modern ranking systems, especially those for semantic search, typically use a "hard negative" sampling strategy to identify challenging items via heuristics, and a distillation labeling strategy to transfer ranking "knowledge" from a more capable model. In practice, these approaches have grown increasingly expensive and complex; for instance, popular pretrained rankers from SentenceTransformers involve a 12-model ensemble whose data provenance hampers reproducibility. Despite their complexity, modern sampling and labeling strategies have not been fully ablated, leaving the underlying source of effectiveness gains unclear. Thus, to better understand why models improve and to potentially reduce the expense of training effective models, we conduct a broad ablation of sampling and distillation processes in neural ranking. We frame and theoretically derive the orthogonal nature of model geometry, affected by example selection, and the effect of teacher ranking entropy on ranking model optimization, establishing conditions in which data augmentation can effectively reduce bias in a ranking model. Empirically, our investigation on established benchmarks and common architectures shows that sampling processes that were once highly effective under contrastive objectives may be spurious or harmful under distillation. We further investigate how data augmentation, in terms of both inputs and targets, affects effectiveness and the intrinsic behavior of ranking models. Through this work, we aim to encourage more computationally efficient approaches that reduce focus on contrastive pairs and instead directly address training dynamics under rankings, which better represent real-world settings.
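The abstract's distinction between sampling (which candidates appear in a list) and teacher entropy (how sharply the teacher's distillation targets concentrate on top candidates) can be illustrated with a minimal listwise distillation sketch. This is not the paper's code; the temperature parameter `tau` and the toy scores are illustrative assumptions. A temperature-scaled softmax over teacher scores sets the entropy of the distillation target independently of which candidates were sampled, and a KL divergence to the student's distribution gives the distillation loss.

```python
import math

def softmax(scores, tau=1.0):
    """Temperature-scaled softmax over candidate relevance scores.

    Lower tau sharpens the distribution (lower entropy);
    higher tau flattens it (higher entropy).
    """
    m = max(s / tau for s in scores)  # subtract max for numerical stability
    exps = [math.exp(s / tau - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(p):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_distill_loss(teacher_scores, student_scores, tau=1.0):
    """KL(teacher || student) over listwise ranking distributions."""
    t = softmax(teacher_scores, tau)
    s = softmax(student_scores, tau)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s) if ti > 0)

# Toy scores for one query's sampled candidate list (illustrative values).
teacher = [4.0, 2.5, 1.0, 0.5]
student = [3.0, 2.8, 1.5, 0.2]

# The candidate list (sampling locality) is fixed here; only the
# teacher's target entropy changes with tau.
for tau in (2.0, 1.0, 0.5):
    t = softmax(teacher, tau)
    print(f"tau={tau}: teacher entropy={entropy(t):.3f}, "
          f"KL loss={kl_distill_loss(teacher, student, tau):.4f}")
```

The sketch makes the paper's framing concrete: changing `tau` varies the entropy of the optimization target without touching the sampled candidates, so the two levers can in principle be studied independently.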
Problem

Research questions and friction points this paper is trying to address.

Analyzing sampling and labeling strategies in ranking models
Exploring efficiency and effectiveness of distillation processes
Investigating data augmentation impact on ranking model behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ablation study on sampling and distillation processes
Theoretical analysis of model geometry and ranking entropy
Investigation of data augmentation impact on ranking models