🤖 AI Summary
This work addresses the limited resolution of traditional centrality and ranking methods in high-dimensional or multivariate non-Euclidean spaces, where intrinsic ordering structures are absent. The authors propose a novel centrality notion based on aggregate pairwise proximity comparisons: a point is deemed more central if typical samples from the distribution lie closer to it. By integrating the Bradley–Terry–Luce preference model with data depth for the first time, they establish a generalized centrality framework applicable to arbitrary metric spaces. Scalable finite-sample estimators are developed via convex M-estimation and spectral aggregation, enabling efficient one-dimensional ranking projections. The resulting method achieves both statistical consistency and computational scalability, demonstrating superior ranking stability and discriminative power over classical depth-based approaches across diverse complex data types.
📝 Abstract
Assessing centrality or ranking observations in multivariate or non-Euclidean spaces is challenging because such data lack an intrinsic order and many classical depth notions lose resolution in high-dimensional or structured settings. We propose a preference-based framework that defines centrality through population pairwise proximity comparisons: a point is central if a typical draw from the underlying distribution tends to lie closer to it than to another. This perspective yields a well-defined statistical functional that generalizes data depth to arbitrary metric spaces. To obtain a coherent one-dimensional representation, we study a Bradley-Terry-Luce projection of the induced preferences and develop two finite-sample estimators based on convex M-estimation and spectral aggregation. The resulting procedures are consistent, scalable, and applicable to high-dimensional and non-Euclidean data, and across a range of examples they exhibit stable ranking behavior and improved resolution relative to classical depth-based methods.