🤖 AI Summary
This paper addresses the problem of predicting the relative performance ranking of entities (e.g., algorithms) in unseen domains using only evaluation results from known domains, thereby avoiding costly empirical re-evaluation. It proposes the first evaluation framework specifically designed for cross-domain ranking prediction, featuring leave-one-domain-out cross-validation, rank-consistency metrics, and a multi-strategy meta-evaluation mechanism. To support rigorous validation, the authors construct a background subtraction benchmark comprising 40 methods evaluated across 53 diverse video domains. Experiments with 30 ranking prediction strategies assess how accurately relative rankings can be predicted on a held-out domain, and the framework supports arbitrary user-defined preferences and generalizes to other sets of entities and domains. The core contribution is the first reproducible, comparable evaluation paradigm for cross-domain ranking prediction, enabling algorithm selection in a new domain without new empirical evaluation and facilitating principled, domain-agnostic performance forecasting.
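The paper's framework is not reproduced here, but the leave-one-domain-out protocol it builds on is easy to sketch. The following is a minimal illustration, not the authors' code: it assumes a precomputed entities × domains performance matrix, uses a hypothetical mean-rank baseline as the prediction strategy, and uses Spearman's rho as one possible rank-consistency metric.

```python
import numpy as np
from typing import Callable
from scipy.stats import rankdata, spearmanr

def mean_rank_strategy(train_scores: np.ndarray) -> np.ndarray:
    """Illustrative baseline: rank entities by their mean rank over known domains."""
    per_domain_ranks = rankdata(-train_scores, axis=0)  # rank 1 = best, per domain
    return rankdata(per_domain_ranks.mean(axis=1))      # predicted overall ranking

def lodo_rank_consistency(
    scores: np.ndarray,
    strategy: Callable[[np.ndarray], np.ndarray],
) -> list[float]:
    """Leave-one-domain-out evaluation of a ranking prediction strategy.

    scores: (n_entities, n_domains) performance matrix, higher = better.
    Returns one rank-consistency value (Spearman's rho) per held-out domain.
    """
    consistencies = []
    for d in range(scores.shape[1]):
        train = np.delete(scores, d, axis=1)   # evaluations on known domains only
        predicted = strategy(train)            # predicted ranking for the unseen domain
        actual = rankdata(-scores[:, d])       # empirical ranking on the held-out domain
        rho, _ = spearmanr(predicted, actual)  # rank-consistency metric
        consistencies.append(rho)
    return consistencies
```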
📝 Abstract
Frequently, multiple entities (methods, algorithms, procedures, solutions, etc.) can be developed for a common task and applied across various domains that differ in the distribution of scenarios encountered. For example, in computer vision, the input data provided to image analysis methods depend on the type of sensor used, its location, and the scene content. However, a crucial difficulty remains: can we predict which entities will perform best in a new domain based on assessments performed on known domains, without having to carry out new and costly evaluations? This paper presents an original methodology to address this question, in a leave-one-domain-out fashion, for various application-specific preferences. We illustrate its use with 30 strategies to predict the rankings of 40 entities (unsupervised background subtraction methods) on 53 domains (videos).
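To make the protocol concrete, here is a hypothetical run of the sketch above on synthetic data shaped like the paper's benchmark (40 entities scored on 53 domains); the values are random placeholders, not the paper's results.

```python
# Synthetic stand-in for the benchmark: 40 entities (background subtraction
# methods) scored on 53 domains (videos); values here are random placeholders.
rng = np.random.default_rng(seed=0)
scores = rng.random((40, 53))

# Each of the paper's 30 strategies would correspond to a different
# `strategy` function plugged into the same leave-one-domain-out loop.
rhos = lodo_rank_consistency(scores, mean_rank_strategy)
print(f"mean rank consistency over 53 held-out domains: {np.mean(rhos):.3f}")
```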