🤖 AI Summary
Existing learning-to-rank (LTR) methods assume documents are displayed at fixed lengths, neglecting how presentation format influences users' relevance perception and attention allocation. This work introduces *variable-length ranking*, a novel task that jointly optimizes document ordering and vertical space allocation (i.e., display length), exposing the intrinsic coupling between ordering and presentation. To address this challenge, we propose VLPL, a listwise model built on the Plackett–Luce framework and equipped with a list-level gradient estimation technique for end-to-end optimization. We validate VLPL in semi-synthetic experiments: VLPL significantly outperforms fixed-length baselines, achieving superior trade-offs between document exposure and attractiveness across diverse scenarios. Even lightweight length-aware mechanisms yield substantial gains, underscoring the importance of explicit length modeling. This work establishes both theoretical foundations and practical paradigms for advancing LTR from "pure ranking" to "ranking + presentation."
📝 Abstract
Learning to Rank (LTR) methods generally assume that each document in a top-K ranking is presented in the same format. However, previous work has shown that users' perception of relevance can be changed by varying the presentation, e.g., allocating more vertical space to some documents to provide additional textual or image information. Presentation length can also redirect attention, as users are more likely to notice longer presentations when scrolling through results. Deciding each document's presentation length in a ranking with a fixed total vertical space is therefore an important problem, and one that existing LTR methods do not address.
We address this gap by introducing the variable presentation length ranking task, in which the ordering of documents and their presentation lengths are decided simultaneously. Although this setting generalizes standard ranking, we show that it brings significant new challenges: first, the probability ranking principle no longer applies, and second, the problem cannot be divided into separate ordering and length-selection tasks.
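The claim that ordering and length selection cannot be separated can be made concrete with a toy instance. Everything below (the attention-decay model, the attractiveness boost, and all numbers) is a hypothetical illustration of the coupling, not the paper's user model: choosing a longer presentation for one document shifts the exposure of every document placed below it, so a two-stage "order first, then pick lengths" pipeline searches a strictly smaller space than joint optimization.

```python
import itertools

# Toy, purely illustrative setup (not from the paper):
relevance = [0.9, 0.5, 0.4]  # hypothetical relevance scores
budget = 4                    # total vertical rows available
gamma = 0.8                   # per-row attention decay
boost = 0.3                   # extra attractiveness per additional row

def utility(order, lengths):
    """Utility of showing docs in `order` with the given row lengths:
    position-decayed exposure times length-boosted attractiveness."""
    row, total = 0.0, 0.0
    for doc, ln in zip(order, lengths):
        if row + ln > budget:          # documents past the budget get no exposure
            break
        exposure = sum(gamma ** (row + k) for k in range(ln))
        total += exposure * relevance[doc] * (1 + boost * (ln - 1))
        row += ln
    return total

def best_lengths(order):
    """Best length assignment (1 or 2 rows each) for a FIXED ordering."""
    return max(itertools.product([1, 2], repeat=len(order)),
               key=lambda ls: utility(order, ls))

# Two-stage baseline: rank by relevance, then choose lengths.
staged_order = tuple(sorted(range(3), key=lambda d: -relevance[d]))
staged = utility(staged_order, best_lengths(staged_order))

# Joint: search orderings and lengths together.
joint = max(utility(o, ls)
            for o in itertools.permutations(range(3))
            for ls in itertools.product([1, 2], repeat=3))

assert joint >= staged  # the joint search space contains every two-stage solution
```

The two-stage result can never exceed the joint optimum, and under user models where length redistributes attention across positions the two can differ, which is the coupling the task formalizes.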
We therefore propose VLPL, a new family of Plackett–Luce listwise gradient estimation methods for the joint optimization of document ordering and lengths. Our semi-synthetic experiments show that VLPL effectively balances the expected exposure and attractiveness of all documents, achieving the best performance across different ranking settings. Furthermore, we observe that even simple length-aware methods achieve significant improvements over fixed-length models. Altogether, our theoretical and empirical results highlight both the importance and the difficulty of combining document presentation with LTR.