🤖 AI Summary
In high-stakes ranking scenarios such as healthcare, education, and hiring, model uncertainty can lead to severe societal consequences. To address this, the paper introduces, for the first time, an abstention mechanism for pairwise learning-to-rank (LTR). We propose a threshold-based abstention strategy grounded in conditional risk estimation and rigorously derive its theoretical optimality. We further design a model-agnostic, plug-and-play abstention-aware ranking framework that integrates plug-in estimation, risk calibration, and pairwise loss optimization. Extensive experiments on standard LTR benchmarks demonstrate that our approach jointly controls the abstention rate and ranking accuracy: it significantly enhances decision safety while preserving overall utility. The framework thus provides a trustworthy, controllable, and robust solution for high-stakes ranking applications.
📝 Abstract
Ranking systems influence decision-making in high-stakes domains like health, education, and employment, where they can have substantial economic and social impacts. This makes the integration of safety mechanisms essential. One such mechanism is $\textit{abstention}$, which enables an algorithmic decision-making system to defer uncertain or low-confidence decisions to human experts. While abstention has been predominantly explored in the context of classification tasks, its application to other machine learning paradigms remains underexplored. In this paper, we introduce a novel method for abstention in pairwise learning-to-rank tasks. Our approach is based on thresholding the ranker's conditional risk: the system abstains from making a decision when the estimated risk exceeds a predefined threshold. Our contributions are threefold: a theoretical characterization of the optimal abstention strategy, a model-agnostic plug-in algorithm for constructing abstaining ranking models, and a comprehensive empirical evaluation across multiple datasets demonstrating the effectiveness of our approach.
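To make the thresholding idea concrete, here is a minimal sketch of risk-based abstention for a single document pair. It is not the paper's exact formulation: the logistic plug-in estimate of the preference probability, the 0-1 conditional risk, the threshold `tau`, and the function name `pairwise_abstaining_prediction` are illustrative assumptions; the paper's estimator and calibration step may differ.

```python
import numpy as np

def pairwise_abstaining_prediction(score_i, score_j, tau):
    """Sketch of threshold-based abstention on a pairwise ranking decision.

    Assumption: a plug-in estimate P(i ranked above j) = sigmoid(score_i - score_j).
    The scores can come from any pretrained ranker (model-agnostic).
    """
    # Plug-in estimate of the preference probability (assumed logistic link).
    p_ij = 1.0 / (1.0 + np.exp(-(score_i - score_j)))

    # Conditional risk of the most likely ordering under 0-1 loss:
    # the probability mass on the opposite ordering.
    risk = min(p_ij, 1.0 - p_ij)

    if risk > tau:
        return None  # abstain: defer this pair to a human expert
    return "i>j" if p_ij >= 0.5 else "j>i"


# Usage with a hypothetical threshold tau = 0.2:
print(pairwise_abstaining_prediction(2.1, 0.3, tau=0.2))    # confident pair -> "i>j"
print(pairwise_abstaining_prediction(0.55, 0.50, tau=0.2))  # uncertain pair -> None (abstain)
```

Lowering `tau` makes the system abstain more often (safer but less coverage), while raising it recovers the standard ranker, which is the abstention-rate/accuracy trade-off the experiments evaluate.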