An Efficient Framework for Whole-Page Reranking via Single-Modal Supervision

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of high annotation cost and performance bottlenecks in full-page re-ranking of search engine results pages (SERPs), this paper proposes SMAR, a cross-modal relevance alignment framework. SMAR leverages strong unimodal rankers to generate supervision signals for page-level multimodal relevance modeling, and introduces intra-modal preference consistency constraints to substantially reduce reliance on costly full-page human annotations. Trained with only a small number of page-level labels, SMAR achieves 70–90% annotation cost reduction on the Qilin and Baidu datasets while significantly outperforming baseline methods in ranking quality. Offline and online A/B tests on the Baidu APP demonstrate consistent improvements across multiple ranking metrics and user experience indicators. The core contribution lies in enabling efficient, low-cost, high-performance SERP optimization via unimodal supervision-driven cross-modal page-level re-ranking.
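The summary above describes two coupled training signals: the limited page-level human labels and an intra-modal preference consistency constraint distilled from the single-modal rankers. The paper's exact loss is not reproduced here; the following is a minimal PyTorch-style sketch of how such an objective could be combined, assuming pairwise logistic losses. All names (`smar_style_loss`, `teacher_scores`, `lambda_consistency`) are illustrative placeholders, not code from the paper.

```python
# Minimal sketch (not the authors' code): combine a supervised pairwise loss on
# the few page-level labels with an intra-modal consistency term that penalizes
# the whole-page reranker for inverting the single-modal rankers' preferences.
import torch
import torch.nn.functional as F


def smar_style_loss(page_scores, page_labels, modality_ids, teacher_scores,
                    lambda_consistency: float = 0.5) -> torch.Tensor:
    """
    page_scores:    (N,) reranker scores for the N results on one candidate page
    page_labels:    (N,) page-level human relevance labels (available for few pages)
    modality_ids:   (N,) modality of each result, e.g. 0=document, 1=image, 2=video
    teacher_scores: (N,) scores from the frozen single-modal rankers
    """
    n = page_scores.shape[0]

    # 1) Supervised page-level ranking loss over label-ordered pairs.
    sup_terms = []
    for i in range(n):
        for j in range(n):
            if page_labels[i] > page_labels[j]:
                sup_terms.append(F.softplus(page_scores[j] - page_scores[i]))
    sup_loss = torch.stack(sup_terms).mean() if sup_terms else page_scores.sum() * 0.0

    # 2) Intra-modal preference consistency: within each modality, keep the
    #    ordering induced by that modality's single-modal ranker.
    cons_terms = []
    for i in range(n):
        for j in range(n):
            if modality_ids[i] == modality_ids[j] and teacher_scores[i] > teacher_scores[j]:
                cons_terms.append(F.softplus(page_scores[j] - page_scores[i]))
    cons_loss = torch.stack(cons_terms).mean() if cons_terms else page_scores.sum() * 0.0

    return sup_loss + lambda_consistency * cons_loss
```

Under this reading, the consistency term acts as a regularizer: it transfers the single-modal rankers' knowledge into the page-level model without requiring any additional page-level labels.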

📝 Abstract
Whole-page reranking plays a critical role in shaping the user experience of search engines, as it integrates retrieval results from multiple modalities, such as documents, images, videos, and LLM outputs. Existing methods rely mainly on large-scale human-annotated data, which is costly and time-consuming to obtain, because whole-page annotation is far more complex than single-modal annotation: it requires assessing the entire result page while accounting for cross-modal relevance differences. How to improve whole-page reranking performance while reducing annotation costs therefore remains a key challenge in optimizing search engine result pages (SERPs). In this paper, we propose SMAR, a novel whole-page reranking framework that leverages strong Single-modal rankers to guide Modal-wise relevance Alignment for effective Reranking, using only limited whole-page annotations to outperform fully annotated reranking models. Specifically, high-quality single-modal rankers are first trained on data specific to their respective modalities. Then, for each query, we select a subset of their outputs to construct candidate pages and perform human annotation at the page level. Finally, we train the whole-page reranker on these limited annotations while enforcing consistency with single-modal preferences to maintain ranking quality within each modality. Experiments on the Qilin and Baidu datasets demonstrate that SMAR reduces annotation costs by about 70–90% while achieving significant ranking improvements over baselines. Further offline and online A/B testing on the Baidu APP also shows notable gains in standard ranking metrics as well as user experience indicators, validating the effectiveness and practical value of our approach in real-world search scenarios.
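The pipeline the abstract outlines (train single-modal rankers, assemble candidate pages from a subset of their outputs, and annotate only those pages) can be illustrated with a short sketch. All names below (`build_candidate_page`, `request_page_annotation`, the ranker callables) are hypothetical placeholders, not APIs from the paper.

```python
# Sketch, under assumptions, of the limited-annotation data construction step:
# frozen single-modal rankers score their own candidates, the top-k per modality
# are interleaved into a candidate page, and only a small budget of such pages
# is sent out for whole-page human annotation.
from typing import Callable, Dict, List


def build_candidate_page(query: str,
                         single_modal_rankers: Dict[str, Callable[[str], List[dict]]],
                         per_modality_k: int = 3) -> List[dict]:
    """Take the top-k results from each modality's ranker for one query."""
    page = []
    for modality, ranker in single_modal_rankers.items():
        for item in ranker(query)[:per_modality_k]:
            page.append({"modality": modality, **item})
    return page


def collect_limited_annotations(queries: List[str],
                                single_modal_rankers: Dict[str, Callable[[str], List[dict]]],
                                request_page_annotation: Callable[[str, List[dict]], List[int]],
                                budget: int) -> List[dict]:
    """Annotate whole pages for only a small budget of queries."""
    labeled_pages = []
    for query in queries[:budget]:  # the limited page-level annotation budget
        page = build_candidate_page(query, single_modal_rankers)
        labels = request_page_annotation(query, page)  # human page-level labels
        labeled_pages.append({"query": query, "page": page, "labels": labels})
    return labeled_pages
```

Keeping the annotation budget small relative to the query log is what drives the reported 70–90% cost reduction; the remaining supervision comes from the single-modal rankers themselves via the consistency constraint.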
Problem

Research questions and friction points this paper is trying to address.

Reducing annotation costs for whole-page search reranking
Improving ranking performance with limited human annotations
Aligning cross-modal relevance using single-modal rankers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages single-modal rankers for cross-modal alignment
Uses limited whole-page annotation to reduce costs
Enforces consistency with single-modal preferences during training
Authors
Zishuai Zhang
School of Artificial Intelligence, Beihang University, Beijing, China
Sihao Yu
Baidu Inc., Beijing, China
Wenyi Xie
Baidu Inc., Beijing, China
Ying Nie
Baidu Inc., Beijing, China
Junfeng Wang
Baidu Inc.
Search, Large Language Model
Zhiming Zheng
School of Artificial Intelligence, Beihang University, Beijing, China
Dawei Yin
Senior Director, Head of Search Science at Baidu
Machine Learning, Web Mining, Data Mining
Hainan Zhang
Beihang University
Dialogue Generation, Text Generation, Federated Learning, Natural Language Processing