Modeling Ranking Properties with In-Context Learning

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world information retrieval (IR) requires optimizing multiple objectives at once, such as relevance, diversity, and fairness, but conventional IR models rely heavily on supervised fine-tuning and therefore generalize poorly across ranking scenarios. To address this, we propose a zero-shot multi-objective ranking control framework that applies example-driven in-context learning (ICL) to ranking attribute modeling. By leveraging carefully crafted ranking examples and multi-objective behavioral prompts, our approach enables dynamic, interpretable, and fine-grained control over group fairness, polarity diversity, and topical diversity, without any parameter updates to the underlying large language model. Experiments on four benchmark test collections (TREC Fairness, Touché, and TREC Deep Learning 2019 and 2020) demonstrate substantial improvements in multi-objective controllability and strong cross-task transferability.

📝 Abstract
While standard IR models are mainly designed to optimize relevance, real-world search often needs to balance additional objectives such as diversity and fairness. These objectives depend on inter-document interactions and are commonly addressed using post-hoc heuristics or supervised learning methods, which require task-specific training for each ranking scenario and dataset. In this work, we propose an in-context learning (ICL) approach that eliminates the need for such training. Instead, our method relies on a small number of example rankings that demonstrate the desired trade-offs between objectives for past queries similar to the current input. We evaluate our approach on four IR test collections to investigate multiple auxiliary objectives: group fairness (TREC Fairness), polarity diversity (Touché), and topical diversity (TREC Deep Learning 2019/2020). We empirically validate that our method enables control over ranking behavior through demonstration engineering, allowing nuanced behavioral adjustments without explicit optimization.
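The abstract's core mechanism is demonstration engineering: the prompt shows a few past queries together with rankings that exhibit the desired trade-off, then asks the model to rank new candidates the same way. Below is a minimal sketch of how such a prompt might be assembled; the `Demonstration` structure, the `[i]` indexing, and the `>` ranking separator are illustrative assumptions, not the paper's actual prompt template.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Demonstration:
    """One worked example: a past query, its candidates, and the desired ordering."""
    query: str
    documents: List[str]   # candidate documents, in arbitrary corpus order
    ranking: List[int]     # desired ordering, as indices into `documents`

def build_icl_prompt(objective: str, demos: List[Demonstration],
                     query: str, candidates: List[str]) -> str:
    """Assemble a zero-training ranking prompt: each demonstration shows a
    ranking that reflects the target trade-off (e.g. relevance balanced
    with fairness), and the current query is left for the LLM to complete."""
    parts = [f"Rank the documents for each query, balancing relevance with {objective}.\n"]
    for d in demos:
        parts.append(f"Query: {d.query}")
        for i, doc in enumerate(d.documents):
            parts.append(f"[{i}] {doc}")
        parts.append("Ranking: " + " > ".join(f"[{i}]" for i in d.ranking) + "\n")
    # The new query ends with an open "Ranking:" for the model to fill in.
    parts.append(f"Query: {query}")
    for i, doc in enumerate(candidates):
        parts.append(f"[{i}] {doc}")
    parts.append("Ranking:")
    return "\n".join(parts)
```

Swapping the demonstrations (e.g. fairness-balanced vs. diversity-balanced example rankings) changes the model's ranking behavior without any parameter updates, which is the control knob the paper studies.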
Problem

Research questions and friction points this paper is trying to address.

Balancing relevance, diversity, and fairness in search rankings
Eliminating task-specific training for ranking scenarios
Controlling ranking behavior through in-context learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses in-context learning for ranking
Eliminates need for task-specific training
Controls ranking via demonstration engineering