RLPO: Residual Listwise Preference Optimization for Long-Context Review Ranking

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to long-context review ranking struggle to balance computational efficiency with effective listwise interaction modeling, limiting their top-k ranking performance. This work proposes RLPO, a novel method that reformulates listwise preference optimization as residual learning in the representation space. Building upon strong pointwise scores provided by a large language model, RLPO employs a lightweight encoder to predict list-level representation residuals, enabling efficient and accurate global context modeling without processing all tokens in the candidate list. We validate our approach on a newly constructed large-scale long-context ranking benchmark, demonstrating significant improvements in NDCG@k and consistent ranking performance even as the candidate list size scales up.

📝 Abstract
Review ranking is pivotal in e-commerce for prioritizing diagnostic and authentic feedback from the deluge of user-generated content. While large language models have improved semantic assessment, existing ranking paradigms face a persistent trade-off in long-context settings. Pointwise scoring is efficient but often fails to account for list-level interactions, leading to miscalibrated top-$k$ rankings. Listwise approaches can leverage global context, yet they are computationally expensive and become unstable as candidate lists grow. To address this, we propose Residual Listwise Preference Optimization (RLPO), which formulates ranking as listwise representation-level residual correction over a strong pointwise LLM scorer. RLPO first produces calibrated pointwise scores and item representations, then applies a lightweight encoder over the representations to predict listwise score residuals, avoiding full token-level listwise processing. We also introduce a large-scale benchmark for long-context review ranking with human verification. Experiments show RLPO improves NDCG@k over strong pointwise and listwise baselines and remains robust as list length increases.
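The abstract's pipeline can be sketched concretely: a pointwise LLM scorer emits a calibrated score and a representation per item, and a lightweight encoder over the stacked representations predicts a scalar residual per item that corrects the pointwise scores with list-level context. The sketch below is a minimal, hedged illustration of that idea, not the paper's implementation: the single self-attention layer, the weight shapes (`Wq`, `Wk`, `w_out`), and the function names are all assumptions chosen for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def listwise_residuals(H, Wq, Wk, w_out):
    """One self-attention pass over the item representations H (n, d),
    followed by a scalar head: returns an (n,) residual per item.
    Hypothetical stand-in for the paper's lightweight encoder."""
    d = H.shape[1]
    attn = softmax((H @ Wq) @ (H @ Wk).T / np.sqrt(d), axis=-1)  # (n, n) list-level interactions
    context = attn @ H                                           # (n, d) contextualized items
    return context @ w_out                                       # (n,) score residuals

def rlpo_rank(pointwise_scores, H, Wq, Wk, w_out, k=None):
    """Final score = pointwise LLM score + listwise residual; rank descending."""
    final = pointwise_scores + listwise_residuals(H, Wq, Wk, w_out)
    order = np.argsort(-final)
    return order if k is None else order[:k]
```

Note the design property this makes visible: with the residual head zeroed out (`w_out = 0`), the method falls back exactly to the pointwise ranking, so the residual encoder only has to learn the list-level correction, not the full score.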
Problem

Research questions and friction points this paper aims to address.

review ranking
long-context
pointwise scoring
listwise optimization
top-k ranking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Listwise Preference Optimization
long-context ranking
listwise ranking
pointwise scoring
large language models