MetaRank: Task-Aware Metric Selection for Model Transferability Estimation

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In transfer learning, the choice of model transferability estimation (MTE) metric is highly task-dependent, yet existing approaches lack an automatic, task-aware mechanism for metric selection. This paper proposes MetaRank, the first framework to formulate MTE metric selection as a learning-to-rank problem. It jointly encodes target-dataset descriptions and MTE metric descriptions with a pre-trained language model to construct a unified semantic embedding space, and introduces a meta-predictor trained under a listwise ranking loss to learn task-adaptive metric preferences. MetaRank is trained offline and performs fast online inference. Extensive experiments across 11 pre-trained models and 11 downstream datasets demonstrate that MetaRank significantly outperforms both the average-performance baseline and manually selected metrics, improving both transferability-estimation accuracy and cross-task generalization.

📝 Abstract
Selecting an appropriate pre-trained source model is a critical, yet computationally expensive, task in transfer learning. Model Transferability Estimation (MTE) methods address this by providing efficient proxy metrics to rank models without full fine-tuning. In practice, the choice of which MTE metric to use is often ad hoc or guided simply by a metric's average historical performance. However, we observe that the effectiveness of MTE metrics is highly task-dependent and no single metric is universally optimal across all target datasets. To address this gap, we introduce MetaRank, a meta-learning framework for automatic, task-aware MTE metric selection. We formulate metric selection as a learning-to-rank problem. Rather than relying on conventional meta-features, MetaRank encodes textual descriptions of both datasets and MTE metrics using a pretrained language model, embedding them into a shared semantic space. A meta-predictor is then trained offline on diverse meta-tasks to learn the intricate relationship between dataset characteristics and metric mechanisms, optimized with a listwise objective that prioritizes correctly ranking the top-performing metrics. During the subsequent online phase, MetaRank efficiently ranks the candidate MTE metrics for a new, unseen target dataset based on its textual description, enabling practitioners to select the most appropriate metric a priori. Extensive experiments across 11 pretrained models and 11 target datasets demonstrate the strong effectiveness of our approach.
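The abstract's listwise objective can be illustrated with a minimal sketch. The snippet below uses a ListNet-style loss (cross-entropy between the softmax of predicted scores and the softmax of ground-truth relevance) as one common instantiation of a listwise ranking loss; the paper's exact objective, which additionally prioritizes the top-ranked metrics, may differ, and the toy numbers here are purely illustrative.

```python
import numpy as np

def listnet_loss(scores, relevance):
    """ListNet-style listwise loss: cross-entropy between the softmax
    of predicted scores and the softmax of true relevance values."""
    p_pred = np.exp(scores - scores.max())
    p_pred /= p_pred.sum()
    p_true = np.exp(relevance - relevance.max())
    p_true /= p_true.sum()
    return float(-(p_true * np.log(p_pred)).sum())

# Toy example: three candidate MTE metrics for one meta-task.
scores = np.array([2.0, 0.5, 1.0])     # meta-predictor outputs
relevance = np.array([3.0, 1.0, 2.0])  # how well each metric actually ranked models
loss = listnet_loss(scores, relevance)
```

The loss is minimized when the predicted score distribution matches the relevance distribution, so gradient descent on it pushes the meta-predictor toward the correct metric ordering.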
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal transfer learning metrics for specific target tasks
Addressing task-dependent effectiveness of model transferability estimation metrics
Automating metric selection using dataset descriptions and meta-learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning framework for automatic metric selection
Encodes dataset and metric descriptions using language model
Trains meta-predictor with listwise ranking objective
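The encoding step above can be sketched as follows. This is a toy stand-in only: the `embed_text` function below hashes tokens to deterministic pseudo-random vectors instead of calling a real pre-trained language model, and the metric descriptions are illustrative paraphrases, not the paper's actual prompts. It shows the interface, i.e. dataset and metric descriptions mapped into one shared space and then scored against each other.

```python
import hashlib
import numpy as np

def embed_text(text, dim=64):
    """Toy stand-in for a PLM encoder: average deterministic
    pseudo-random token vectors, then L2-normalize."""
    vecs = []
    for token in text.lower().split():
        seed = int(hashlib.md5(token.encode()).hexdigest(), 16) % (2**32)
        vecs.append(np.random.default_rng(seed).standard_normal(dim))
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

dataset_desc = "Fine-grained classification of 200 bird species from photos"
metric_descs = {
    "LogME": "Maximum evidence of a linear model on extracted features",
    "LEEP": "Log expected empirical prediction from source label distributions",
}
d = embed_text(dataset_desc)
# MetaRank's trained meta-predictor would score each (dataset, metric)
# pair; cosine similarity here is only a placeholder for that scorer.
scores = {name: float(d @ embed_text(desc)) for name, desc in metric_descs.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

In the real system, the placeholder similarity is replaced by the meta-predictor learned offline under the listwise objective, and the top-ranked metric is the one recommended for the new target dataset.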