🤖 AI Summary
In zero-shot document ranking, pointwise LLM-based methods suffer from inconsistent scoring and suboptimal performance because they fail to model inter-document comparisons. To address this, the paper proposes the Global-Consistent Comparative Pointwise (GCCP) ranking strategy. Its core innovations are: (1) introducing an anchor document, constructed as a query-focused summary of pseudo-relevant candidates, as a globally shared reference point, so each candidate is scored contrastively against it while scoring remains independent and parallelizable; and (2) a training-free Post-Aggregation of Global Context (PAGC) mechanism that fuses these contrastive scores with existing pointwise scores at low cost. Evaluated on the TREC Deep Learning and BEIR benchmarks, GCCP significantly outperforms prior zero-shot pointwise methods while maintaining comparable efficiency, and is competitive with substantially more expensive pairwise and listwise approaches. In effect, it injects global comparative signals into pointwise ranking without sacrificing its efficiency advantage.
📝 Abstract
Recent advancements have successfully harnessed the power of Large Language Models (LLMs) for zero-shot document ranking, exploring a variety of prompting strategies. Comparative approaches like pairwise and listwise achieve high effectiveness but are computationally intensive and thus less practical for larger-scale applications. Scoring-based pointwise approaches exhibit superior efficiency by independently and simultaneously generating the relevance scores for each candidate document. However, this independence ignores critical comparative insights between documents, resulting in inconsistent scoring and suboptimal performance. In this paper, we aim to improve the effectiveness of pointwise methods while preserving their efficiency through two key innovations: (1) We propose a novel Global-Consistent Comparative Pointwise Ranking (GCCP) strategy that incorporates global reference comparisons between each candidate and an anchor document to generate contrastive relevance scores. We strategically design the anchor document as a query-focused summary of pseudo-relevant candidates, which serves as an effective reference point by capturing the global context for document comparison. (2) These contrastive relevance scores can be efficiently Post-Aggregated with existing pointwise methods, seamlessly integrating essential Global Context information in a training-free manner (PAGC). Extensive experiments on the TREC DL and BEIR benchmarks demonstrate that our approach significantly outperforms previous pointwise methods while maintaining comparable efficiency. Our method also achieves competitive performance against comparative methods that require substantially more computational resources. Further analyses validate the efficacy of our anchor construction strategy.
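To make the pipeline concrete, the sketch below mocks up the flow the abstract describes: build an anchor from pseudo-relevant candidates, score each candidate contrastively against that anchor, then post-aggregate with the plain pointwise scores. Everything here is an assumption for illustration: `pointwise_score` is a toy term-overlap stand-in for an LLM relevance probability, `build_anchor` concatenates top candidates instead of producing a query-focused summary, and the function names and the fusion weight `alpha` are invented, not from the paper.

```python
# Illustrative sketch of a GCCP/PAGC-style pipeline (all names and scoring
# functions are hypothetical stand-ins, not the paper's implementation).

def pointwise_score(query: str, doc: str) -> float:
    """Toy stand-in for an LLM pointwise relevance score: query-term overlap."""
    terms = set(query.lower().split())
    words = doc.lower().split()
    return sum(w in terms for w in words) / max(len(words), 1)

def build_anchor(query: str, candidates: list[str], k: int = 2) -> str:
    """Stand-in anchor: concatenate the top-k pseudo-relevant candidates
    (the paper instead builds a query-focused summary of them)."""
    top = sorted(candidates, key=lambda d: pointwise_score(query, d), reverse=True)[:k]
    return " ".join(top)

def contrastive_score(query: str, doc: str, anchor: str) -> float:
    """Stand-in for an LLM comparison of the candidate against the shared anchor."""
    return pointwise_score(query, doc) - pointwise_score(query, anchor)

def pagc_rank(query: str, candidates: list[str], alpha: float = 0.5) -> list[str]:
    """Training-free post-aggregation: fuse pointwise and contrastive scores."""
    anchor = build_anchor(query, candidates)
    def fused(doc: str) -> float:
        return (alpha * pointwise_score(query, doc)
                + (1 - alpha) * contrastive_score(query, doc, anchor))
    return sorted(candidates, key=fused, reverse=True)
```

Because every candidate is compared to the same anchor, the contrastive scores share a common reference frame, which is what restores cross-document consistency while each score is still computed independently.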