PRECTR-V2: Unified Relevance-CTR Framework with Cross-User Preference Mining, Exposure Bias Correction, and LLM-Distilled Encoder Optimization

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three key challenges in search systems: the difficulty of modeling cold-start users, generalization shifts caused by exposure bias, and the misalignment between frozen BERT encoders and click-through rate (CTR) objectives. To tackle these issues, the authors propose a unified framework for relevance and CTR prediction. The approach enhances cold-start user representations through cross-user query-level preference transfer, mitigates exposure bias via embedding perturbation and label reconstruction to enable hard negative sampling and pairwise loss, and introduces a lightweight LLM-distilled Transformer encoder tailored for CTR fine-tuning. This design supports end-to-end optimization while adhering to latency constraints. Experimental results demonstrate that the method significantly improves personalized relevance modeling for cold-start users, alleviates distributional shift in the coarse-ranking stage, and outperforms conventional Emb+MLP architectures in CTR performance under strict latency requirements.
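
To make the exposure-bias step concrete, below is a minimal PyTorch sketch of hard-negative synthesis via embedding perturbation combined with a pairwise loss. The Gaussian noise, hinge margin, and the `score_fn` interface are illustrative assumptions; the paper's exact perturbation scheme and label-reconstruction rules are not reproduced here.

```python
import torch
import torch.nn.functional as F

def perturb_to_hard_negatives(pos_emb: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    # Inject Gaussian noise around exposed positive embeddings; the
    # perturbed points are relabeled as negatives ("label reconstruction").
    return pos_emb + noise_std * torch.randn_like(pos_emb)

def pairwise_exposure_loss(score_fn, user_emb, pos_emb, noise_std=0.1, margin=1.0):
    # Hinge-style pairwise objective: each positive should outscore its
    # perturbed hard-negative counterpart by at least `margin`.
    neg_emb = perturb_to_hard_negatives(pos_emb, noise_std)
    pos_score = score_fn(user_emb, pos_emb)  # shape: (batch,)
    neg_score = score_fn(user_emb, neg_emb)  # shape: (batch,)
    return F.relu(margin - (pos_score - neg_score)).mean()

# Usage with a dot-product scorer (an assumption, not the paper's model):
# loss = pairwise_exposure_loss(lambda u, i: (u * i).sum(-1), user_emb, pos_emb)
```

A BPR-style log-sigmoid loss would serve the same purpose as the hinge form used here; the key idea is that synthetic near-boundary negatives expose the ranker to the broader candidate distribution it sees at coarse-ranking time.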

📝 Abstract
In search systems, effectively coordinating the two core objectives of search relevance matching and click-through rate (CTR) prediction is crucial for discovering users' interests and enhancing platform revenue. In our prior work PRECTR, we proposed a unified framework that integrates these two subtasks, thereby eliminating their inconsistency and enabling mutual benefit. However, our previous work still faces three main challenges. First, low-activity users and new users have limited search behavioral data, making it difficult to achieve effective personalized relevance preference modeling. Second, training data for ranking models predominantly come from high-relevance exposures, creating a distribution mismatch with the broader candidate space in coarse ranking and leading to generalization bias. Third, due to latency constraints, the original model employs an Emb+MLP architecture with a frozen BERT encoder, which prevents joint optimization and creates misalignment between representation learning and CTR fine-tuning. To solve these issues, we further reinforce our method and propose PRECTR-V2. Specifically, we mitigate low-activity users' sparse-behavior problem by mining global relevance preferences under a given query, which enables effective personalized relevance modeling in cold-start scenarios. Subsequently, we construct hard negative samples through embedding noise injection and relevance label reconstruction, and optimize their relative ranking against positive samples via a pairwise loss, thereby correcting exposure bias. Finally, we pretrain a lightweight Transformer-based encoder via knowledge distillation from an LLM and SFT on the text relevance classification task. This encoder replaces the frozen BERT module, enabling better adaptation to CTR fine-tuning and advancing beyond the traditional Emb+MLP paradigm.
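
The distillation step described above follows a standard teacher-student pattern. The sketch below shows one common way to pretrain the lightweight encoder against LLM relevance logits, blending a soft-target KL term with hard-label cross-entropy on the relevance classification task. The temperature `T`, mixing weight `alpha`, and the KL+CE blend are conventional defaults, assumed here rather than confirmed details of the paper's recipe.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled student and
    # LLM-teacher distributions (scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: cross-entropy on the SFT relevance classification labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

Because the student is a small Transformer rather than a frozen BERT, its weights can be updated end-to-end during CTR fine-tuning, which is what lets the framework move past the Emb+MLP paradigm under the same latency budget.
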
Problem

Research questions and friction points this paper is trying to address.

search relevance
click-through rate prediction
exposure bias
cold-start users
representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-User Preference Mining
Exposure Bias Correction
LLM-Distilled Encoder
Unified Relevance-CTR Framework
Cold-Start Personalization